I'm currently trying to check this website for its current list of patches, select the newest zip file, and download it. Done manually, the zip download pops up automatically after I accept the EULA, but in the background it redirects elsewhere (somewhere beyond the scope of network tools, Wireshark, etc.).
I've been using the requests library to pull the zip, but instead it returns the HTML of the page that it redirects to (not the zip itself).
So it seems I can accept the EULA, and then it redirects somewhere. But when I write that response out as a zip, the content inside is all wrong.
Is it possible to still save that zip (with requests)? If not, what are the alternatives?
Edit: Maybe an add-on, or maybe the real issue: how can I tell if I'm still logged in to a site? I previously used a requests session, posted my login details, and then checked whether it went through. But now I'm starting to feel like it's not logging me in, because the same HTML I received from the EULA GET request keeps recurring everywhere else. My response looks like the source code behind the LOG IN page, and it also shows up when I look at the response from the EULA, instead of the actual list.
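For what it's worth, here is a minimal sketch of the kind of check being described: log in with a requests.Session, follow the redirects, and inspect the Content-Type before writing anything to disk. The URLs and form field names are hypothetical placeholders, not the real site's.

import requests

session = requests.Session()

# Hypothetical URLs and field names; substitute the real ones.
login = session.post("https://example.com/login",
                     data={"username": "me", "password": "secret"})
print(login.status_code)

resp = session.get("https://example.com/patches/latest",
                   allow_redirects=True)

# A logged-out session is usually bounced back to the login page, so
# check what actually came back before saving it.
print(resp.headers.get("Content-Type"))  # text/html means a page, not the zip
print([r.url for r in resp.history])     # where the redirects went

if "html" not in resp.headers.get("Content-Type", ""):
    with open("patch.zip", "wb") as f:
        f.write(resp.content)

If the Content-Type is HTML and the body looks like the login page, the session never authenticated in the first place.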
First of all, thank you for everything that you do. Without this community, I would hate web design and be reliant on my teacher's outdated, static methods. Much love <3
So, this is a tricky one (maybe).
I want to have, essentially, an iframe on a webpage that contains a website I coded previously. It was a project for school that never went live, but I'd like to include it as part of my portfolio. Problem is, an iframe needs a URL for a source, but I just have the folder with more folders full of code, fonts, and images. How can I tell the browser to populate this box with everything from "name" folder? And then how will it know to run the code instead of just showing a file tree or something?
In the end, I want a page describing a previous web project that lets the client experience that project within that one page. And I don't want to buy a domain for every project I do.
Maybe there's an easier way I'm not thinking of?
To make it interesting, my new portfolio site is being made in Squarespace...maybe. I bought a domain from them because I had a promo code and wanted to try the platform, but I kind of hate it. I can't change any of the code and it won't maintain a connection to Typekit. So all I can do is change the basic appearance of preexisting elements. It's like WordPress all over again....LAME! Sadly, I already bought the domain.
Can Squarespace just be a host? Is there a way to download the raw code of these templates, edit it, and upload it again?
Thanks for all your help!
"I want to have, essentially, an iframe on a webpage that contains a website I coded previously."
Squarespace's file upload mechanism is very limited. Without using the Developers Platform, there is no effective way to upload many files at once. Furthermore, there is no way to create folders. Therefore, even if you were willing to upload each .html file and each asset one-by-one, there'd be no way to organize the files into folders (assuming that the "tree" you mentioned includes additional sub-folders).
Initially, in order to get the files to be accessible by Squarespace, you'd have to do one of the following:
1. Use the Squarespace Developers Platform (a.k.a. "Developer Mode") and upload your to-be-iframed (TBI) website files to the "assets" folder using SFTP or Git.
2. Host your TBI website files somewhere else (a different hosting environment, for example) which will maintain your file/folder structure.
"How can I tell the browser to populate this box with everything from 'name' folder? And then how will it know to run the code instead of just showing a file tree or something?"
Assuming that the TBI website has an index.html file, home.html file, or similar, and assuming you were to use the Squarespace Developer Platform, you'd insert the iframe either in a Code Block or directly within a template/.region file, using something like:
<iframe src="/assets/tbiwebsitefolder/index.html"></iframe>
while setting your other iframe attributes (such as height and width) as needed.
"Is there a way to download the raw code of these templates, edit it, and upload it again?"
Yes. You select a template and then enable Developer Mode on that template. From there, you use SFTP or Git to download the template files, edit them, and re-upload.
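With Git, the workflow looks roughly like the following; the repository URL is an assumption based on Squarespace exposing a template.git per site, and "yoursite" is a placeholder for your site name.

git clone https://yoursite.squarespace.com/template.git
# ...edit the template files locally, then...
git add -A
git commit -m "template edits"
git push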
You may benefit from reviewing some considerations of enabling Developer Mode on a Squarespace template.
One other idea, to avoid the iframe and Developer Mode entirely, would be to capture images of the TBI website rendered in a browser, and then simply add those images to a gallery block or gallery page. This could allow you to convey the general idea of the project but would of course not capture the full "experience" of it.
I'm really confused by the process of uploading a file (an image or PDF in my case) to a Django app programmatically (via an HTTP POST or PUT request). I've read the docs, but I must admit they haven't helped me get through it. The most confusing part to me is the template (and form): why do I need to specify a template and form in the view?
I just want to send a file to a directory, and I'd like to know what exactly is needed to do that on the Django side as well as on the request side (content type...?).
I'd be really grateful to anyone able to show me some direction here.
You don't say what doc you're reading, so it's hard for us to tell what you mean. But if you're planning on doing a programmatic upload, you don't need a template, of course. You do however need some code that accepts the POST and processes the upload: you can do that with a form, or simply access the data in request.FILES and do what you want with it yourself.
Edit: It's true that that page doesn't make any reference to uploading programmatically, because most people's use case is uploading through the browser, via a form. But the page does explain how to handle uploaded files, which is the only bit that you need.
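To make that concrete, here is a minimal sketch of both sides. The view name, upload path, and URL are placeholders, and in real code you'd want to sanitize the incoming filename.

# views.py -- a minimal file-accepting view: no template or form needed.
from django.http import HttpResponse, HttpResponseNotAllowed
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt  # programmatic clients won't send a CSRF token
def upload_view(request):
    if request.method != 'POST':
        return HttpResponseNotAllowed(['POST'])
    f = request.FILES['file']  # 'file' must match the client's field name
    with open('/tmp/uploads/' + f.name, 'wb') as dest:  # sanitize f.name in real code
        for chunk in f.chunks():  # stream to disk in chunks
            dest.write(chunk)
    return HttpResponse('OK')

On the client side, requests builds the multipart/form-data body and the Content-Type header (including its boundary) for you:

import requests
requests.post('http://example.com/upload/',
              files={'file': open('report.pdf', 'rb')})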
My setup is: Django 1.3/Python 2.7.2/Win Server 2008 R2/IIS 7.5/MS SQL Server 2008 R2. I am developing an application whose main function is to analyze uploaded files and produce a report.
Reading over the documentation for django-filetransfers, I believe this is a solution to a problem I've been trying to solve for a while (i.e. form-based file uploads completely block all Django responses until the file-transfer finishes...horror for even moderate-sized files).
The documentation talks about piping uploads to S3 or Blobstore, and that might be what I end up doing eventually, but during development I thought maybe I could just set up my own "poor-man's S3" on a server that I control. This would basically just be another Django instance (or possibly a simple ASP.NET app) whose sole purpose is to receive uploaded files. This sounds like it should be possible with django-filetransfers and would solve the problem of Django responsiveness (???).
But I am missing some bits of understanding how this works in general, as well as some specifics. Maybe an example will help: let's say I have MyMainDjangoServer and MyFileUploadServer. MyMainDjangoServer will serve the views, including the upload form. MyFileUploadServer will "catch" the uploaded files. My questions/confusion are as follows:
My upload form will contain additional fields beyond just the file(s)...do I understand correctly that MyMainDjangoServer will somehow still get that form data, minus the file data (basically: request.POST), and the file data gets shunted over to MyFileUploadServer? How does this work? Will MyMainDjangoServer still block during the upload to MyFileUploadServer?
I assume that what I would need to do on MyFileUploadServer is have a view/URL that handles the form request and sucks out the request.FILES data. What else needs to happen? What happens to the rest of the form data?
How would I set up my settings.py for this scenario? The django-filetransfers examples seem to assume either S3 or GAE/Blobstore but maybe I am missing some basics.
Any advice/answers appreciated...this is a confusing and frustrating area of Django for me.
"MyMainDjangoServer will somehow still get that form data, minus the file data (basically: request.POST), and the file data gets shunted over to MyFileUploadServer? How does this work? Will MyMainDjangoServer still block during the upload to MyFileUploadServer?"
I know the GAE Blobstore, and presumably S3 as well, handles this by requiring you to give it a success_url. In your case that would be the URL on MyMainDjangoServer to which your file-receiving view on MyFileUploadServer would re-post the non-file form data once the upload is complete.
Have a look at the create_upload_url method here: https://developers.google.com/appengine/docs/python/blobstore/functions
You need to recreate this functionality in some form (see below).
"How would I set up my settings.py for this scenario?"
You'd need to create your own filetransfers backend which would be a file with a prepare_upload function in it.
You can see the App Engine one here:
https://github.com/django-nonrel/djangoappengine/blob/develop/storage.py
The prepare_upload method just wraps the GAE create_upload_url method mentioned above.
So in your settings.py you'd have something like:
PREPARE_UPLOAD_BACKEND = 'myapp.filetransfers_backend.prepare_upload'
(i.e. the import path to your prepare_upload function)
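As a rough sketch (check the exact signature and return value against the backends bundled with filetransfers; the MyFileUploadServer URL is a placeholder), that file might look like:

# myapp/filetransfers_backend.py -- rough sketch only; verify the exact
# signature your filetransfers version expects.
def prepare_upload(request, url, **kwargs):
    # Point the browser's multipart POST at MyFileUploadServer, passing
    # along the final destination so the upload server can re-post the
    # non-file form data there once the transfer completes.
    upload_url = ('http://myfileuploadserver.example.com/receive/'
                  '?success_url=' + url)
    return upload_url, {}  # (upload URL, extra POST data)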
For the rest you can start with the ones provided by filetransfers already:
SERVE_FILE_BACKEND = 'filetransfers.backends.url.serve_file'
# if you need it:
PUBLIC_DOWNLOAD_URL_BACKEND = 'filetransfers.backends.url.public_download_url'
These rely on file_field.url being set (see the Django docs), and since your files will be on a separate server, you'll probably need to look into writing a custom storage backend for Django too (the S3 and GAE cases assume you're using the custom Django storage backends from here).
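A skeleton of such a backend, using Django's documented Storage API (the base URL and transfer logic are placeholders you'd fill in with calls to your upload server):

# myapp/storage.py -- skeleton of a custom storage backend for files
# that live on MyFileUploadServer.
from django.core.files.storage import Storage

class UploadServerStorage(Storage):
    base_url = 'http://myfileuploadserver.example.com/media/'

    def _open(self, name, mode='rb'):
        raise NotImplementedError  # fetch the file from the upload server

    def _save(self, name, content):
        raise NotImplementedError  # push the file, return the stored name

    def exists(self, name):
        return False  # naive: always accept the proposed name

    def url(self, name):
        return self.base_url + name  # this is what file_field.url returns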
A question like this was asked before and the person got nothing but criticism; I hope that won't be the case here.
I have a website that allows a business to add their menu to my site, and some have requested the ability to import a menu (a PDF or JPG) that is already online elsewhere. So I made a form that saves a URL to the DB, and that URL is then used in the src of an iframe on my site.
I tested it all and it worked fine on my local machine (using the Django development server). When I synced it over to my production server and saved the same URL I was testing with, the iframe loads no content.
I imagine it has something to do with trying to read an individual file from another server, because it works if I make the URL google.com or point it at an image under my own domain. Is there anything I can do to fix this? Storing a URL instead of a PDF in my DB is much more efficient, so this approach is preferred over uploading their menus to my site.
I don't think this question needs any code attached, but if you want to see some let me hear it.
Thanks
The menu you're testing with probably has the X-Frame-Options response header set.
Is there a reason you're putting the image/pdf as the src on an iframe instead of just using the img tag (or putting an img tag inside your iframe)? There's still no guarantee that will work for all pages, as some sites will refuse to serve media to an external page, but I suspect this is your problem in this case.
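You can check for that header from Python without a browser; the URL here is a placeholder for the stored menu URL:

import requests

# DENY or SAMEORIGIN means browsers will refuse to render the document
# inside an iframe on your domain.
resp = requests.head('http://example.com/menus/menu.pdf', allow_redirects=True)
print(resp.headers.get('X-Frame-Options'))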
I have a website that I built using Django. Using the settings.py file, I send myself error messages that are generated from the site, partly so that I can see if I made any errors.
From time to time I get rather strange errors, and they seem to mostly be around about the same area of the site (where I wrote a little tutorial trying to explain how I set up a Django Blog Engine).
The errors I'm getting all look like something that could have come from a typo.
For example, these two errors are very close together, and I never had an 'x' or 'post' as a variable on those pages.
'/blog_engine/page/step-10-sub-templates/{{+x.get_absolute_url+}}/'
'/blog_engine/page/step-10-sub-templates/{{+post.get_absolute_url+}}/'
The user agent is:
'HTTP_USER_AGENT': 'Mozilla/5.0 (compatible; Purebot/1.1; +http://www.puritysearch.net/)',
Which I take it is a scraper bot, but I can't figure out what they would be able to get with this kind of attack.
At the risk of sounding stupid, what should I do? Is it a hack attempt or are they simply trying to copy my site?
Edit: I'll follow the advice already given, but I'm really curious as to why someone would run a script like this. Are they just trying to copy? It isn't hitting admin pages or even any of the forms. It seems like a harmless (aside from potential plagiarism) attempt to dig in and find content.
From your USER_AGENT info it looks like this is a web spider from puritysearch.net.
I suggest you put a CAPTCHA on your website. Program it to trigger when something tries to access 10 pages in 10 seconds (almost no human would do this), or work out some other sensible criterion for triggering it.
Also, maintain a robots.txt file, which most crawlers honor. State your rules in it; for example, you can tell crawlers to keep off certain busy sections of your site.
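For instance, a robots.txt like the following (the paths are examples; "Purebot" comes from the user agent string above) would shut out that crawler entirely and keep well-behaved ones away from the busy sections. It only helps against crawlers that actually honor the file:

# robots.txt, served from the site root
User-agent: Purebot
Disallow: /

User-agent: *
Disallow: /blog_engine/page/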
If the problem persists, you might want to contact that particular site's system admin and try to figure out what's going on.
This way you will not be completely blocking crawlers (which are needed for your website to become popular), and at the same time you are making sure that your users get a fast experience on your site.
Project HoneyPot has this bot listed as a malicious one (http://www.projecthoneypot.org/ip_174.133.177.66, check the comments there), and what you should probably do is ban that IP and/or user agent.
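Blocking at the firewall or in IIS is cheapest, but if you only control the Django layer, here is a minimal sketch of an old-style (Django 1.3-era) middleware; the IP and agent substring are taken from the question above:

# middleware.py -- ban a specific IP / user agent before any view runs.
from django.http import HttpResponseForbidden

BANNED_IPS = set(['174.133.177.66'])
BANNED_AGENT_SUBSTRINGS = ('Purebot',)

class BanBotsMiddleware(object):
    def process_request(self, request):
        # Note: behind a proxy, REMOTE_ADDR may be the proxy's address.
        ip = request.META.get('REMOTE_ADDR', '')
        agent = request.META.get('HTTP_USER_AGENT', '')
        if ip in BANNED_IPS or any(s in agent for s in BANNED_AGENT_SUBSTRINGS):
            return HttpResponseForbidden()
        return None  # everyone else passes through

Add it to MIDDLEWARE_CLASSES (e.g. 'myapp.middleware.BanBotsMiddleware') to activate it.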