Django 1.1 file-based session backend multi-threaded solution

I was reading django.contrib.sessions.backends.file today; in the save method of SessionStore there is code like the following, used to keep saves intact when multiple threads write sessions:
output_file_fd, output_file_name = tempfile.mkstemp(dir=dir, prefix=prefix + '_out_')
renamed = False
try:
    try:
        os.write(output_file_fd, self.encode(session_data))
    finally:
        os.close(output_file_fd)
    os.rename(output_file_name, session_file_name)
    renamed = True
finally:
    if not renamed:
        os.unlink(output_file_name)
I don't quite understand how this solves the integrity problem.

Technically this doesn't solve the integrity problem completely; ticket #9084 addresses this issue.
Essentially it works by using tempfile.mkstemp, which is guaranteed to create the temporary file atomically, and writing the session data to that file. It then calls os.rename() to rename the temp file over the real session file. On Unix this atomically replaces the old file; on Windows it raises an error if the destination already exists. This should be fixed for Django 1.1.
If you look in the revision history you'll see that they previously used locks, but changed to this method for various reasons.
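For illustration, here is the same write-to-a-temp-file-then-rename pattern as a standalone sketch (the helper name is mine, not Django's). On Python 3, os.replace() gives the atomic overwrite on both POSIX and Windows:
import os
import tempfile

def atomic_write(path, data):
    # data is bytes; write it to a temp file in the target's directory,
    # then atomically rename it over the target so readers never see a
    # partially written file.
    dir_name = os.path.dirname(path) or '.'
    fd, tmp_name = tempfile.mkstemp(dir=dir_name, prefix='_out_')
    try:
        try:
            os.write(fd, data)
        finally:
            os.close(fd)
        os.replace(tmp_name, path)  # atomic on POSIX and Windows
        tmp_name = None
    finally:
        if tmp_name is not None:
            os.unlink(tmp_name)
The temp file is created in the same directory as the target because rename is only atomic within a single filesystem.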


PyDrive download file from a SHARED drive

I know how to use PyDrive to download a file from my own drive; the problem is that I need to download (or at the very least OPEN) an xlsx file from a shared drive. Here is my code so far to download the file:
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

gauth = GoogleAuth()
gauth.LoadClientConfigFile('client_secret.json')
drive = GoogleDrive(gauth)

team_drive_id = 'XXXXX'
parent_folder_id = 'XXXXX'
file_id = 'XXXXX'

f = drive.CreateFile({
    'id': file_id,
    'parents': [{
        'kind': 'drive#fileLink',
        'teamDriveId': team_drive_id,
        'id': parent_folder_id
    }]
})
f.GetContentFile('<file_name>')
The code returns error 404 (file not found), which makes sense: when I check the URL that GetContentFile is looking at, it is the URL leading to my drive, not the shared drive. I am probably missing a 'supportsTeamDrives': True somewhere (but where?).
There is actually a post associated with my question at https://github.com/gsuitedevs/PyDrive/issues/149, where someone raised the exact same issue. Apparently that prompted a developer to modify PyDrive about two weeks ago, but I still don't understand how to interpret his modifications or how to fix my problem. I have not noticed any other similar post on Stack Overflow (not about downloading from a shared drive, anyway). Any help would be deeply appreciated.
Kind regards,
Berti
I found an answer in a newer GitHub post: https://github.com/gsuitedevs/PyDrive/issues/160
The answer from SMB784 works: "supportsTeamDrives=True" should be added to files.py (in the pydrive package) at lines 235-236.
This definitely fixed my issue.
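For context, the patch boils down to passing supportsTeamDrives=True on the underlying Drive API request. Here is a rough sketch of that request made directly with google-api-python-client rather than through PyDrive's internals ('creds' and the file name are placeholders; newer API revisions call the flag supportsAllDrives):
import io
from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload

service = build('drive', 'v2', credentials=creds)  # creds: your authorized credentials
# supportsTeamDrives=True is the flag the patch adds to the file request
request = service.files().get_media(fileId='XXXXX', supportsTeamDrives=True)
fh = io.FileIO('<file_name>', 'wb')
downloader = MediaIoBaseDownload(fh, request)
done = False
while not done:
    status, done = downloader.next_chunk()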
Move from pydrive to pydrive2
After encountering the same problem, I ran into this comment on an issue within the googleworkspace/PyDrive GitHub page from 2020:
...there is an actively maintained version PyDrive2 (Travis tests, regular releases including conda) from the DVC.org team...please give it a try and let us know - https://github.com/iterative/PyDrive2
Moving from the pydrive package to the pydrive2 package enabled me to download a local copy of a file stored on a shared drive, requiring only the file ID.
After installing pydrive2, you can download a local copy of the file within the shared drive by using the following code template:
# load necessary modules ----
from pydrive2.auth import GoogleAuth
from pydrive2.drive import GoogleDrive
# authenticate yourself using your credentials
gauth = GoogleAuth()
gauth.LoadClientConfigFile('client_secret.json')
drive = GoogleDrive(gauth)
# store the file ID of the file located within the shared drive
file_id = 'XXXXX'
# store the output file name
output_file_name = 'YYYYY'
# create an instance of Google Drive file with auth of this instance
f = drive.CreateFile({'id': file_id})
# save content of this file as a local file
f.GetContentFile(output_file_name)

How to use SublimeLinter's cmd attribute

I'm writing a SublimeLinter (a Sublime Text plugin) plugin that uses luacheck for a custom syntax we use. So far I have it working with simply cmd = 'luacheck #', the # apparently being replaced by the filename when SublimeLinter calls luacheck.
The problem is that with SublimeLinter in 'background' mode, the warnings don't actually update until the file is saved; e.g. if I remove a line containing a warning, the warning will still be there, just highlighting a blank space (until I save, that is). I have a feeling this is because I'm using the #: since it is replaced by the filename, luacheck won't see changes until the file on disk is updated.
However, the SublimeLinter documentation on cmd is not great, and I'm having trouble figuring out how to write one correctly. None of the plugins on their GitHub seem to use #, either. If I copy the default lua plugin (which uses cmd = 'luac -p * -') and use cmd = 'luacheck * -', luacheck executes but only returns an I/O error. Could someone provide some more insight into how SublimeLinter's cmd attribute works?
EDIT: I was able to fix this issue by using tempfile_suffix = 'lua' in linter.py. According to the SublimeLinter docs, this is used for linters that don't use stdin, so I suppose my issue could have been with luacheck instead.
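For anyone landing here later, a rough sketch of how those pieces might fit together in linter.py (the regex and luacheck flags are illustrative, not a tested configuration):
from SublimeLinter.lint import Linter

class Luacheck(Linter):
    syntax = 'lua'
    # With tempfile_suffix set, SublimeLinter writes the unsaved buffer to
    # a temp file and substitutes that file's path for the filename token,
    # so background linting sees changes before the file is saved.
    cmd = 'luacheck # --formatter plain'
    regex = r'^.+?:(?P<line>\d+):(?P<col>\d+): (?P<message>.+)$'
    tempfile_suffix = 'lua'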

Why are my files smaller after I FTP them using this Python program?

I'm trying to send some files (a zip and a Word doc) to a directory on a server using ftplib. I have the broad strokes sorted out:
import ftplib

# 'file' and 'mode' are placeholders from the original post
session = ftplib.FTP('ftp.server', 'user', 'pass')
filewpt = open(file, mode)
readfile = open(file, mode)
session.cwd('new/work/directory')
session.storbinary('STOR filename.zip', filewpt)
session.storbinary('STOR readme.doc', readfile)
print "filename.zip and readme.doc were sent to the folder on ftp"
readfile.close()
filewpt.close()
session.quit()
This may give someone else what they are after, but not me. I have been using FileZilla to check that the files were transferred. When they make it to the server, I see that they are both much smaller, or even zero K in the case of the readme.doc file. I'm guessing this has something to do with the fact that I stored the files in 'binary transfer mode' <--- whatever that means.
This is where my problems lie. I have no idea at all (yet) what is meant by binary transfer mode. Is it simply that I have to use retrbinary to return the files to their original state?
Could someone please explain to me like I'm a two-year-old what has happened to my files? If there's any more info required, please let me know.
This is a fantastic resource and solved most of my problems. I'm still trying to work out the intricacies of FTP, but I guess I will save that for another day. The link below builds a function to effortlessly upload files to an FTP server without the partial-upload problem that more than one Stack Exchanger has experienced.
http://effbot.org/librarybook/ftplib.htm
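To make 'binary transfer mode' concrete: binary mode (storbinary) sends the file byte for byte, while text mode can translate line endings or stop early, which is how binary files like .zip and .doc end up shrunken or zero K. A minimal sketch in the spirit of that page (host, credentials and paths are placeholders):
import ftplib

session = ftplib.FTP('ftp.example.com', 'user', 'pass')
session.cwd('new/work/directory')

for name in ('filename.zip', 'readme.doc'):
    # Open in 'rb': text mode may translate newlines or stop at a stray
    # EOF byte, corrupting or truncating binary uploads.
    with open(name, 'rb') as f:
        session.storbinary('STOR ' + name, f)

session.quit()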

Libtorrent - Given a magnet link, how do you generate a torrent file?

I have read through the manual and I cannot find the answer. Given a magnet link, I would like to generate a torrent file so that it can be loaded on the next startup, to avoid re-downloading the metadata. I have tried the fast-resume feature, but I still have to fetch the metadata when I use it, and that can take quite a bit of time. The examples I have seen are for creating torrent files for a new torrent, whereas I would like to create one matching a magnet URI.
Solution found here:
http://code.google.com/p/libtorrent/issues/detail?id=165#c5
See creating torrent:
http://www.rasterbar.com/products/libtorrent/make_torrent.html
Modify the first lines:
file_storage fs;
// recursively adds files in directories
add_files(fs, "./my_torrent");
create_torrent t(fs);
To this:
torrent_info ti = handle.get_torrent_info();
create_torrent t(ti);
"handle" is from here:
torrent_handle add_magnet_uri(session& ses, std::string const& uri, add_torrent_params p);
Also, before creating the torrent you have to make sure that the metadata has been downloaded; do this by calling handle.has_metadata().
UPDATE
It seems the libtorrent Python API is missing some of the important C++ API required to create a torrent from a magnet link: the example above won't work in Python because the create_torrent Python class does not accept torrent_info as a parameter (C++ has that constructor available).
So I tried it another way, but also hit a brick wall that makes it impossible; here is the code:
if handle.has_metadata():
    torinfo = handle.get_torrent_info()

    fs = libtorrent.file_storage()
    for file in torinfo.files():
        fs.add_file(file)
    torfile = libtorrent.create_torrent(fs)
    torfile.set_comment(torinfo.comment())
    torfile.set_creator(torinfo.creator())
    for i in xrange(0, torinfo.num_pieces()):
        hash = torinfo.hash_for_piece(i)
        torfile.set_hash(i, hash)
    for url_seed in torinfo.url_seeds():
        torfile.add_url_seed(url_seed)
    for http_seed in torinfo.http_seeds():
        torfile.add_http_seed(http_seed)
    for node in torinfo.nodes():
        torfile.add_node(node)
    for tracker in torinfo.trackers():
        torfile.add_tracker(tracker)
    torfile.set_priv(torinfo.priv())

    f = open(magnet_torrent, "wb")
    f.write(libtorrent.bencode(torfile.generate()))
    f.close()
There is an error thrown on this line:
torfile.set_hash(i, hash)
It expects hash to be const char*, but torrent_info.hash_for_piece(int) returns a big_number class, which has no API to convert it back to const char*.
When I find some time I will report this missing API to the libtorrent developers as a bug, since currently it is impossible to create a .torrent file from a magnet URI when using the Python bindings.
torrent_info.orig_files() is also missing from the Python bindings; I'm not sure whether torrent_info.files() is sufficient.
UPDATE 2
I've created an issue on this, see it here:
http://code.google.com/p/libtorrent/issues/detail?id=294
Star it so they fix it fast.
UPDATE 3
It is fixed now; there is a 0.16.0 release. Binaries for Windows are also available.
Just wanted to provide a quick update using the modern libtorrent Python package: libtorrent now has the parse_magnet_uri function, which you can use to generate a torrent handle:
import libtorrent
import os
import time

def magnet_to_torrent(magnet_uri, dst):
    """
    Args:
        magnet_uri (str): magnet link to convert to torrent file
        dst (str): path to the destination folder where the torrent will be saved
    """
    # Parse magnet URI parameters
    params = libtorrent.parse_magnet_uri(magnet_uri)

    # Download torrent info
    session = libtorrent.session()
    handle = session.add_torrent(params)
    print("Downloading metadata...")
    while not handle.has_metadata():
        time.sleep(0.1)

    # Create torrent and save to file
    torrent_info = handle.get_torrent_info()
    torrent_file = libtorrent.create_torrent(torrent_info)
    torrent_path = os.path.join(dst, torrent_info.name() + ".torrent")
    with open(torrent_path, "wb") as f:
        f.write(libtorrent.bencode(torrent_file.generate()))
    print("Torrent saved to %s" % torrent_path)
If saving the resume data didn't work for you, you can generate a new torrent file using the information from the existing connection.
fs = libtorrent.file_storage()
libtorrent.add_files(fs, "somefiles")
t = libtorrent.create_torrent(fs)
t.add_tracker("http://10.0.0.1:312/announce")
t.set_creator("My Torrent")
t.set_comment("Some comments")
t.set_priv(True)
libtorrent.set_piece_hashes(t, "C:\\", lambda x: 0)
f = open("mytorrent.torrent", "wb")
f.write(libtorrent.bencode(t.generate()))
f.close()
I doubt that it'll make the resume faster than the function built specifically for this purpose.
Take a look at this code: http://code.google.com/p/libtorrent/issues/attachmentText?id=165&aid=-5595452662388837431&name=java_client.cpp&token=km_XkD5NBdXitTaBwtCir8bN-1U%3A1327784186190
It uses add_magnet_uri, which I think is what you need.

QSettings - Sync issue between two processes

I am using QSettings in non-GUI products to store their settings in XML files. This is written as a library which gets used in C and C++ programs. There is one XML file per product. Each product might have more than one subproduct, and these are written into the XML grouped by subproduct, as follows -
File: "product1.xml"
<product1>
<subproduct1>
<settings1>..</settings1>
....
<settingsn>..</settingsn>
</subproduct1>
...
<subproductn>
<settings1>..</settings1>
....
<settingsn>..</settingsn>
</subproductn>
</product1>
File: productn.xml
<productn>
<subproduct1>
<settings1>..</settings1>
....
<settingsn>..</settingsn>
</subproduct1>
...
<subproductn>
<settings1>..</settings1>
....
<settingsn>..</settingsn>
</subproductn>
</productn>
The code in one process does the following -

QSettings *settings = new QSettings("product1.xml", XmlFormat);
settings->setValue("settings1", <value>);
sleep(20);
settings->setValue("settings2", <value2>);
settings->sync();
When the first process goes to sleep, I start another process which does the following -

QSettings *settings = new QSettings("product1.xml", XmlFormat);
settings->remove("settings1");
settings->setValue("settings3", <value3>);
settings->sync();
I would expect settings1 to be gone from product1.xml, but it still persists in the file at the end of the two processes above. I am not using QCoreApplication(..) in my settings library. Please point out any issues if there is anything wrong in the above design.
This is kind of an odd thing to be doing, but one thing to note is that the sync() call is what actually writes the file to disk. If you want your second process to see the changes you've made, you'll need to call sync() before the second process accesses the file, to guarantee that it actually sees your modifications. Thus I would try putting a settings->sync() call right before your sleep(20).
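To illustrate the ordering, here is a minimal sketch using the PyQt5 bindings (IniFormat stands in for the custom XmlFormat, which the question doesn't show):
import time
from PyQt5.QtCore import QSettings

# Process 1 (sketch): flush settings1 to disk *before* sleeping, so the
# second process can see it and remove it.
settings = QSettings('product1.ini', QSettings.IniFormat)
settings.setValue('settings1', '<value>')
settings.sync()               # write pending changes to disk now
time.sleep(20)                # the second process removes settings1 here
settings.setValue('settings2', '<value2>')
settings.sync()               # sync() also reloads outside changes, so the removal survives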
Maybe you have to do delete settings; after the sync() to make sure the file is not held open, and then do the writing in the other process?
Does this compile? Which implementation of XmlFormat are you using, and on which OS? There must be some special code in your project for storing and reading XML; something in that code must work differently from what you expect.