truncate text in django

I have a blog built with Django. I want Django to be able to read my post content and, after it reaches a specific marker which I have already specified, truncate the content.
For example, this is a post's content:
*Once upon a time, there lived a shepherd boy who was bored watching his flock of sheep on the hill. To amuse himself, he shouted, “Wolf! Wolf! The sheep are being chased by the wolf!” The villagers came running to help the boy and save the sheep. They found nothing and the boy just laughed looking at their angry faces.
[django_Truncate]
“Don’t cry ‘wolf’ when there’s no wolf boy!”, they said angrily and left. The boy just laughed at them.
After a while, he got bored and cried ‘wolf!’ again, fooling the villagers a second time. The angry villagers warned the boy a second time and left. The boy continued watching the flock. After a while, he saw a real wolf and cried loudly, “Wolf! Please help! The wolf is chasing the sheep. Help!”*
I want Django to read it and, when it reaches [django_Truncate], truncate it there,
so the paragraph before [django_Truncate] will be displayed and the remainder will not be displayed.
Is something like this possible?

You will have to create a custom template filter which takes the value and splits it on your desired string.
If you don't need to change the value to split on, you can create it like this:
def split_on_string(value):
    return value.split("[django_Truncate]")[0]
And then you can use it like this:
{{ mytext|split_on_string }}
If you want the string to split on to be a dynamic value, you can adapt this function to take an argument, as explained in the Django documentation on custom template filters. That documentation also explains where to put the custom filter and how to make it usable in a template.
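For reference, a minimal sketch of that dynamic variant, registered the way the docs describe (the module name templatetags/split_filters.py and the default marker here are assumptions for illustration):

from django import template

register = template.Library()

@register.filter
def split_on_string(value, marker="[django_Truncate]"):
    # Keep only the text before the first occurrence of the marker.
    return value.split(marker)[0]

After {% load split_filters %}, you would use it as {{ mytext|split_on_string:"[django_Truncate]" }}.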

Related

how can I scrape specific words in web news?

I want to scrape the name of the writer of a news article.
But the structure differs too much between news websites to get it reliably.
For instance, the name may be stored under a class name such as reporter, author, or writer, etc.
I think the best way to get it is to search for the word "reporter" in the body of the web page, like using "Ctrl + F". If it is there, it will probably appear early in the text,
and then I can take the word that appears just in front of the word "reporter".
But I don't know how to write this in Python, or whether it can be done at all.
Please give me a draft, or a link I can refer to.
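A rough draft of that Ctrl+F idea in Python, assuming the requests and BeautifulSoup libraries (the URL and the marker word "reporter" are placeholders; real news sites will need per-site tuning):

import re
import requests
from bs4 import BeautifulSoup

# Fetch the page and flatten it to plain text, the way Ctrl+F sees it.
html = requests.get("https://example.com/news-article").text
text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

# Grab the word immediately in front of the first occurrence of "reporter".
match = re.search(r"(\S+)\s+reporter", text, re.IGNORECASE)
print(match.group(1) if match else "No 'reporter' marker found")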

Did Google recently change UI to grant public access to bucket objects in Google Cloud Platform?

Ok, I've been using Google Cloud Platform for some video files
that are viewable from a few web pages I built. I started this two or three years ago, and I have loved it.
But, now it appears they broke it, without warning/telling us.
So, in the platform's console, yesterday (for the first time since a month or two ago), I uploaded another video...that part went fine. But when it came time to click on the checkbox to grant public access, the checkbox is now GONE. (The only part of the UI that looks NEW
is the column labeled 'public access'. Instead of just a check-box to toggle on or off, there's now a yellow triangle and an oval-shaped symbol. Once or twice, I was able to get a popup to appear saying 'edit permission', but that quickly led into the weeds.)
After half an hour or so, I finally thought to call platform support, and explained my problem to a guy (with just enough Australian accent to cause me to have to ask for repeats quite a bit...sigh).
So, they logged a case# for me, and I mentioned I was headed to bed and asked that we now use email (rather than the phone) to continue. Just before bed, I got the case#, and a query about whether it was ok for them to 'change my console'. I replied to the email, saying yes, and went to bed.
So that was last night. This morning, re-reading their email, it seems to say that it could be 3 or 4 days before a more technical person will contact me.
After some re-reading of their platform-console docs, I'm now GUESSING that maybe they just nuked the public-access checkbox, and that now I'm supposed to spend hours (days?) taking a short course on IAM permissions, and learn some new long-winded method.
(This whole mess could have been avoided if they'd just emailed us an informational warning of this UI change, with some new 5-step short list or tutorial on how to learn to use their 'new, much more complicated,
way to specify public-access'. From where I sit, this change is equivalent to Microsoft saying 'instead of that checkbox, you'll need to learn to make registry edits...see our platform docs on how to do that.')
Right now, I have more than half-a-mind, to seriously consider bailing out of Google's cloud storage, and consider switching to one of the others. But, I'm not quite ready yet, to make that jump (from the frying-pan into the fire?). :^)
Anyone else been down this road? What meeting did I miss? Is there a quicker way out of my dilemma, than just waiting for Google-support to get back to me?
It looks like the change you mention was introduced on July 18th. I’m not sure why, but judging by the change description, it looks like it is aimed to avoid accidentally making sensitive information public: “Objects can no longer be made public through one-click actions”.
You can find the procedure to make a single object public here. It can be achieved through the Console and won't take you more than a few minutes. Once the object is shared publicly, you can use the icon in the “public access” column to get the URL for the object.
You can also make all the content of a bucket public using a similar approach.
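If you script your uploads rather than clicking through the Console, the same object-level change can be made from code; for instance, a minimal sketch with the google-cloud-storage Python client (the bucket and object names are placeholders):

from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("my-video.mp4")
blob.make_public()  # grants allUsers read access to this one object
print(blob.public_url)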
When you upload your objects into a bucket, you can upload with the ACL set to publicRead,
and all your objects will have a public URL.
using System.IO;
using System.Threading.Tasks;
using Google.Cloud.Storage.V1;

// Uploads an object with a predefined ACL that makes it publicly readable.
public async Task UploadObjectAsync(string bucketName, string objectName, Stream source, string contentType = "image/jpeg")
{
    var storage = StorageClient.Create();
    await storage.UploadObjectAsync(bucketName, objectName, contentType, source, new UploadObjectOptions()
    {
        PredefinedAcl = PredefinedObjectAcl.PublicRead
    });
}
As I suspected. (I still wonder if they even considered sending an email to each registered/existing customer.)
Ok, yes (finally, after some practice), this solves it! Thx for those two answers.
(But in my view, their UI change is still a work-in-progress.) So, I have a SUGGESTION for ya, Google. Once one is into the permissions-edit dialog, and remembers to do an 'add', there are the 3 fields. The first and third are fine...drop-downs with choices. But that middle entry needs work...how about doing something like an auto-guess-ahead: initialize the field to a suggested value of 'allUsers', so we don't have to remember what to type and how to spell it, or something along those lines.
EDIT: [Actually, it ought to be possible to make that field a drop-down-list choice, with 'allUsers' as one suggested value, and a second value as a text-entry (for specific user-names, etc).]
Unfortunately, it is not possible to list files without access to the bucket that contains them. This is due to the current design of the library, which requires that the bucket be loaded before listing its files.

Needing advanced ID3 tag handling for specific situation

My situation is probably quite rare and complex, so I'll explain it in detail.
Many years ago, I put together a hand-selected collection of MP3s, which ended up taking a month or so and is now at 8000 songs. All of these songs were manually ID3 tagged, which took me forever. Unfortunately, I had a strange tagging philosophy. For songs that featured multiple artists, I would put the features in the Artist field, rather than the Title field. Here's what I mean:
What I have: OB O'Brien (ft. Drake) - 2 On/Thotful
What every normal person has: OB O'Brien - 2 On/Thotful (ft. Drake)
Is there any software or script that handles ID3 tags that will let me perform an advanced renaming like this? Basically, I want to batch handle my MP3s so that if "(ft. *)" is found in the Artist field, it is removed and instead appended to the end of the Title field. Possible?
Yes, this is possible. Please try Mp3tag.
In the associated forum you will find many examples of how to add your own "Actions" that do exactly what you want:
Check a specific tag (like Artist or AlbumArtist) for a specific string (like "ft.")
If found, do something with the match, like moving this part to another tag
You can even use Regular Expressions, like
Action: Guess values
Source format:
%ARTIST%$regexp(%TITLE%,'^(.+?)\s+[[({<]?\s*(?:featuring|feat\.?|ft\.?)\s*([^])}>]+)\>\s*[])}>]?(.*)$',' feat. $2$3+++$1',1)
... or ...
%ARTIST%$regexp(%TITLE%,'^(.+?)\s+[[({<]?\s*(?:featuring|feat\.?|ft\.?)\s*([^])}>]+)[])}>]?(.*)$',' feat. $2$3+++$1',1)
Guessing pattern:
%ARTIST%+++%TITLE%
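If you would rather script the change than use Mp3tag's Actions, here is a minimal Python sketch using the mutagen library (pip install mutagen); the regex and the music folder path are assumptions you would tune to your collection:

import re
from pathlib import Path
from mutagen.easyid3 import EasyID3

FEAT = re.compile(r"\s*\((?:featuring|feat\.?|ft\.?)\s+[^)]*\)", re.IGNORECASE)

def move_feature(path):
    # Move a "(ft. ...)" suffix from the Artist tag to the end of the Title tag.
    audio = EasyID3(path)
    artist = audio.get("artist", [""])[0]
    match = FEAT.search(artist)
    if match:
        audio["artist"] = FEAT.sub("", artist).strip()
        audio["title"] = (audio.get("title", [""])[0] + " " + match.group(0).strip()).strip()
        audio.save()

for mp3 in Path("music").rglob("*.mp3"):
    move_feature(str(mp3))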

Feedparser not seeing description element value in RSS feed

Feedparser parses the bulk of this feed just fine, but for some reason will not return a value for the description element.
Feed: http://bigpopfunpodcast.libsyn.com/rss
The code I'm testing with:
import feedparser

show = feedparser.parse('http://bigpopfunpodcast.libsyn.com/rss')
if 'description' in show.feed:
    description = show.feed.description
else:
    description = 'No description found'
This code returns an empty string. When I print the contents of show to see the results of the parse, there’s no description element. But when I view the RSS data myself, the description element is clearly there. The code should return:
"Big Pop Fun with Tom Wilson is a podcast dedicated to pop, and the lives lived within its gentle influence, or tightening grasp, or soul crushing evil claws, depending on who you are. Unapologetically big and fun, Tom has lived a life that the world has seen through pop cultural lenses, and you can keep them on or take them off, but he's going to keep going anyway. Enjoy!"
Feed: http://cashinginwithtjmiller.libsyn.com/rss
The code returns the description on this feed, but I don't see a difference between the two feeds that explains the inconsistency.
I'm not able to find an explanation, even after much searching around. Does anyone know a solution to this? Thanks in advance.
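One way to narrow this down: feedparser normalizes element names, so the channel-level description may surface under a different key than 'description' (for instance, it is also exposed through feed.subtitle, and iTunes tags in podcast feeds can map onto the same keys). A quick diagnostic sketch, assuming nothing beyond feedparser itself, shows which key actually carries the text:

import feedparser

show = feedparser.parse('http://bigpopfunpodcast.libsyn.com/rss')
for key in ('description', 'subtitle', 'summary', 'info'):
    print(key, '->', repr(show.feed.get(key, '<missing>'))[:100])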

Graph API: Number of Comments for Posts Are Inconsistent Among Various API Calls

Hello Graph API experts,
When you call /[post_id], the result contains a "comments" field which has a "count" field that is supposed to hold the total number of comments for this particular post.
Now, if you call /[post_id]/comments, you get the actual comment data, one by one.
The problem I am facing is that, when I compare the "comments.count" field's value and the number of all of the actual comment data returned, they are different.
What's even worse, if you then look at the same post on Facebook.com's Timeline, where you can see the number of comments for that post (i.e. the "view all * comments" link), this number is also different from the "comments.count" field value.
And this is not happening to only one post, but to many of them - I observe it tends to happen more with posts that have more than 100 comments (I actually counted all the comments on the Timeline, and the count matched the number of actual comment entries returned from the /[post_id]/comments API call).
Is this normal API behaviour? Which number should I, or would you, trust if this is the way it is?
OK, when you look at the comment counts on some timeline posts, you might see a count of, for example, 16 comments, but when you try to count the comments on the post manually, you may see only 15. So where is that missing comment? Is Facebook counting wrong? Not exactly: some people change their profile privacy to something like "don't show my comments to people who aren't my friends or who have no mutual friends with me". You cannot get these privatized comments from the Graph API, but they are not excluded from the total count. So what's the solution? Just make sure you correctly fetch all the data Facebook provides you, compare it against the count to see how many comments appear to be missing, and show that missing count as a "private comments" count in your application. I think that is much better.
Welcome to the world of Facebook API programming. Yes, this is normal (but apparently not desired) API behavior. This is one of the inconsistencies we're faced with when programming around their API. CBroe is probably correct in his comment above, it is data inconsistencies between servers in their API cluster.
In addition to this, there are problems with pagination. You can use the offset + limit parameters to say how much data you want and from where to take it. If you are dealing with a number of posts, you can say offset=0 and limit=50 and it'll work, but then if you try offset=100 and limit=50 it might return empty data, and yet offset=100 and limit=100 will return 100 posts.
The API is just buggy and full of inconsistencies which don't seem to have any way to be solved.
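A practical workaround for both problems is to ignore comments.count and the offset parameter entirely, and instead walk the paging links the API returns, counting as you go. A minimal Python sketch (the access token and post id are placeholders you must supply):

import requests

def count_comments(post_id, access_token):
    url = "https://graph.facebook.com/%s/comments" % post_id
    params = {"access_token": access_token, "limit": 100}
    total = 0
    while url:
        data = requests.get(url, params=params).json()
        total += len(data.get("data", []))
        # 'paging.next' is an absolute URL with the query string baked in.
        url = data.get("paging", {}).get("next")
        params = {}
    return total

The difference between this total and comments.count is then the number of comments hidden by privacy settings, which you can surface in your application as suggested above.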
I think we got oversold on the Open Graph. I don't think it's what Facebook told us it would be, and I'm starting to feel the burn from selling that to my boss and finding out that I perhaps can't deliver :(