ColdFusion automatically adds full path to links starting with #

I'm working on creating HTML snapshots for an AJAX application that will use Google's AJAX crawling scheme. I'm accessing the page through a cfhttp request. The reason for this is that this app is going to be embedded on other sites, and those sites use lots of different server-side languages, so rewriting it in every language is not practical.
When I create an anchor link, like so,
<a class='icf_btn_small' href='##!year=#YearID#'>#YearID#</a>
ColdFusion outputs a full URL, like so:
<a class='icf_btn_small' href='http://example.com:80/snapshots/#!year=2016'>2016</a>
Is this a setting that can be turned off?
Thank you

Editing a previously generated HTML file with Django

I am very new to web development and the following is my use case: I have a large number of Bokeh charts, each in a separate HTML file. In simple terms, I would like to have a home page where I can provide links to each of these charts. However, at runtime I would like to edit these separate HTML files so as to provide a link back to the home page or to other pages. I would not like to modify the HTML files permanently, so that I can also use them outside of the web page for simple visualization on my system.
What is the best way to do this? Are there technologies outside Django I should be looking at to do something like this?
If most of the content is static, maybe have a look at Jekyll.
The include functionality would let you create one file with the 'link back to the homepage', or in fact any further content you want to avoid repeating (such as navbars, headers, and footers).
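For example, a minimal sketch (the _includes/home_link.html filename is made up):

<!-- _includes/home_link.html: the shared fragment -->
<a href="/">Back to the home page</a>

<!-- then, in any page or layout that needs it: -->
{% include home_link.html %}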
Bootstrap 4 is your friend for making the site look shiny.
As you're building the site you can run the development server with jekyll serve, which lets you connect from your browser and preview changes as you're making them. This would be accessible somewhere like http://localhost:4000/
When you're ready to publish, you can use the jekyll build command, which outputs all of the static HTML files to the _site directory. Notice that at this point, the step of 'putting the homepage link in every page' is handled automatically by Jekyll, and you end up with a directory you can upload directly to any hosting platform. The original HTML files/Bokeh charts can therefore remain in their original form for use elsewhere.
This method is probably much more efficient than using Django for your use case, which seems to require serving lots of static content which already exists. With Django in production you'd need an application server as well as a webserver, and possibly a database, which means more things to go wrong.
For bonus points, once you've got the hosting set up, stick the whole thing behind CloudFlare to reduce your hosting costs and improve access speeds for visitors around the world!
Good luck.
EDIT: response to comments:
Do you mean that I should abandon Django altogether?
If the purpose is just to serve your existing HTML files to the public, without any requirement for authentication, editing of content by users through the frontend, or more advanced back-end functions, then yes, Django is probably overkill for this task.
How is Jekyll different from Django?
Django is a Python web framework which allows you to build an interactive site on which users or staff can log in, post articles, comment, etc. One of its key features is the ability to define database relationships through 'Models' and then have all the admin-side forms generated automatically in the background. This means, with minimal work, you can instantly have the 'admin portal' side of the site live, which works great for use cases like large blogs or news sites. You would then build the frontend, which can also be interactive. Launching this into production is a separate task which involves configuring multiple server components.
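As a quick illustration (a made-up model, not anything from your project):

# models.py -- a tiny illustrative Django model; registering it with
# the admin gives you create/edit forms for it with no extra code.
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    published = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.title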
Jekyll, on the other hand, is much simpler, and basically gives you a way to create some template HTML files (avoiding the need to repeat code for things like navigation bars); the jekyll build command then outputs a _site directory which can be uploaded straight to a basic webserver. This is the crucial part, as you then only need a webserver which can serve static content, rather than requiring Python, a database, an application server like uWSGI, etc.
Let's look at this example from the Jekyll docs with your use case in mind.
You could define a YML file with a list of all your charts:
docs_list_title: All Charts here.
docs:
  - title: A Lovely Bokeh Chart.
    url: bokeh_chart_1.html
  - title: This Bokeh Chart is even Better
    url: bokeh_chart_2.html
You mentioned previously that you already have the HTML files, so really what you've done here is made a list of those, which can be interpreted by the frontend.
The HTML template portion would look something like this:
<h2>{{ site.data.samplelist.docs_list_title }}</h2>
<ul>
  {% for item in site.data.samplelist.docs %}
    <li><a href="{{ item.url }}">{{ item.title }}</a></li>
  {% endfor %}
</ul>
This would result in a list of links to all of your Charts, with the link text as the title.
Obviously you could then go further and add more info to the YML file, for example putting publisher: someone beneath each url, which could then be accessed in the template's for loop as {{ item.publisher }}.
Can tools like Jekyll, Django and Bootstrap be used together?
Bootstrap can be used with Django or Jekyll, as it is a CSS library which controls how HTML is rendered in the user's browser. Check the documentation for more examples of its capability.
A good starting point may be to download a theme from somewhere like Start Bootstrap. Once you have that as a ZIP file, you can put it in your Jekyll project and attempt to have it render through the dev server with jekyll serve. You can then move nav bar or header code into separate include files (see my earlier link to the Jekyll docs) and before you know it you'll be seeing progress.
The best way to learn is to just go ahead and try this!

Possible to use page on external site as content for MailChimp?

We've got a WordPress site and I've built a page that pulls from different sections of our site, which I'd like to use as the content for a bi-weekly MailChimp newsletter. Is there any way to automate pulling a div on our site into the body of a MailChimp template?
All the tools I've found pull in the page as "an article" and just put an image and headline into the message body, rather than the full page verbatim.
Not averse to doing some coding, but not sure how to start.
Thanks for any suggestions.
I can think of two different routes you might be able to try. The first is to generate an RSS feed for the content you're talking about and then use an RSS Campaign to send the email. Depending on how you have this data stored on your site, WordPress might already be generating an RSS feed for you for that content.
The second option involves more coding. If you create a template with an editable section you can then pass in the content of that section via the API. This is probably harder, since the campaign content APIs are pretty convoluted in v2.0. v3.0 should make that easier, but it's still in beta.
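If you go the API route, the rough shape of the idea, sketched in Python against the v3.0 API, would be something like the following. The page URL, div id, API key, and campaign id are all placeholders, and a real implementation should use a proper HTML parser rather than string searching:

import requests

API_KEY = 'your-api-key-us1'     # placeholder; the suffix is your datacenter
DC = 'us1'                       # datacenter from the end of the API key
CAMPAIGN_ID = 'abc123def4'       # placeholder campaign id

# Grab the WordPress page and crudely cut out the div we want.
# (This breaks on nested divs; use BeautifulSoup or similar in practice.)
page = requests.get('https://example.com/newsletter-roundup').text
start = page.find('<div id="newsletter">')
end = page.find('</div>', start) + len('</div>')
snippet = page[start:end]

# Set the campaign's HTML body via PUT /3.0/campaigns/{id}/content,
# authenticating with basic auth (any username plus the API key).
resp = requests.put(
    'https://%s.api.mailchimp.com/3.0/campaigns/%s/content' % (DC, CAMPAIGN_ID),
    auth=('anystring', API_KEY),
    json={'html': snippet},
)
resp.raise_for_status()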

How Google crawls Angular-based apps with HTML5 URLs

I'm building a web application with the following url structure:
/ is the landing page, not angular based
/choose uses Angular, it basically contains search
/fund/<code> also with Angular, contains specific data for a certain fund
There's no problem indexing /; it's plain and simple HTML, already SEO optimized. But I need both /choose and /fund/... to be crawled by Google, and that's the problem.
My app uses HTML5 mode, and we never point to the app URLs using hashbangs like foo.com#!/choose, always foo.com/choose.
Also, according to Google's docs on the matter, I put <meta name="fragment" content="!"> in the head of every Angular page we have. But using "Fetch as Google" to inspect my site, I can't figure out how Google is requesting the pages from my server. I'm using Django on the backend and I built a middleware to catch _escaped_fragment_ and act on it, but Google never sends it.
So, simply put, my questions are:
Why isn't Google fetching my urls using _escaped_fragment_?
How will Google fetch the pages?
foo.com?_escaped_fragment_=/choose
foo.com/choose?_escaped_fragment_=
According to the Google specs, you should use
foo.com/choose?_escaped_fragment_=hashfragment.
But as mentioned here, you don't seem to need the hashfragment and the equals-sign part, since your URL is already mapped on your Django server side. So get rid of them and give it a try.
Your final URL should look like this: foo.com/choose?_escaped_fragment_.
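For reference, the kind of Django middleware the asker describes could be sketched like this (old-style middleware; serve_snapshot() is a hypothetical helper that returns the prerendered HTML for a path):

from django.http import HttpResponse

def serve_snapshot(path):
    # Hypothetical: load the prerendered snapshot for this URL from disk,
    # a cache, or a headless-browser service.
    with open('/var/snapshots' + path + '.html') as f:
        return f.read()

class EscapedFragmentMiddleware(object):
    def process_request(self, request):
        # Google's AJAX crawling scheme adds ?_escaped_fragment_= to the
        # URL; when present, short-circuit and return the static snapshot.
        if '_escaped_fragment_' in request.GET:
            return HttpResponse(serve_snapshot(request.path))
        return None  # fall through to the normal view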
Hope it helps!

How to prevent Django pages from refreshing after submit?

I am using the Django template system. What I want is, when I submit a form or click a link, the page does not refresh but is updated with the data returned from the server. Is that possible?
I recommend a combination of jQuery (an easy, powerful, popular JavaScript library) and Dajax/Dajaxice (http://www.dajaxproject.com/). Dajax is very easy to set up and use, as is jQuery. Dajax is strictly for AJAX communication through Django; jQuery is perfect for taking a simple site and making it more fluid, intuitive, and user-friendly.
You need JavaScript to do that. What you are looking for is called AJAX (Asynchronous JavaScript and XML). Essentially, it means you use JavaScript to send a request to the server as soon as the link/button is clicked. The server returns some data to your script, which can then be used to manipulate the HTML page, e.g. by inserting the response data into the DOM. Since you do everything with JavaScript, no reloading of the whole page is required.
To start, read an AJAX tutorial. There are certain JavaScript libraries that make these things simpler for you (e.g. jQuery), but you really should understand how this stuff works first, since otherwise you might get into trouble while trying to debug it.
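To make the server side concrete, here is a minimal sketch of a Django view such a request could hit (the view name and data are made up; assumes a Django version with JsonResponse):

# views.py -- an endpoint your JavaScript can call without a page reload
from django.http import JsonResponse

def search_results(request):
    query = request.GET.get('q', '')
    # Hardcoded data to keep the sketch self-contained; a real view
    # would query your models here.
    items = ['apple', 'banana', 'cherry']
    return JsonResponse({'results': [i for i in items if query in i]})

On the client, something like jQuery's $.getJSON('/search/?q=an', callback) would fetch this, and the callback would splice the results into the DOM without a full-page reload.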

Emulating a web browser to wrap the functionality of several similar web sites

I'm interested in emulating the functionality of a web browser in C++ so that I can create a wrapper for several web sites. Right now, the biggest issue with these sites is that they make heavy use of JavaScript that interacts with the HTML DOM. Thus, the simple solution of using curl to download the page and something like RapidXML to parse its contents is out.
Next, I considered using something like v8 with curl, and that solves the issue of interpreting the JavaScript on the page nicely. However, it doesn't solve the issue of connecting the HTML DOM methods with the JavaScript; in other words, document.getElementById() would fail in v8.
Next, I considered WebKit, which seems like it's perfectly suited to emulate a web browser--after all, Chromium and Safari both utilize it in their web browsers. However, it's a little too complete. I don't need all of the rendering aspects it includes.
So, I'd be looking for some way to:
Make an SSL connection to a web site
Interpret the JavaScript on that web site in connection with the HTML DOM
Set the value of the username/passwords <input> fields with my username and password
Simulate clicking the "Submit" button by calling the formSubmit() function, from <input type="button" onClick="formSubmit()">
Handle the HTTP POST form action and the subsequent HTTP 301 and JavaScript redirects (accomplished using window.location)
Repeat 2-5 as needed
Besides what I've already considered, what other options do I have? Ideally, I'd want this to be extremely lightweight, without linking to many libraries.
I'm primarily concerned with developing for Windows 7 64-bit.
Well, this sounds all too much like a brute-force program. Disregarding that, and since you don't seem to need to render any website, I think you should just fetch the page through cURL or something, then parse it, find the form using a regex, retrieve the form action, and then make a request using the method taken from the <form> tag and whichever inputs you want.
The problem is that there is no general way to know whether you've logged in successfully, short of some kind of per-site check. This comes mainly from the fact that many sites use server-side sessions rather than plain cookies or HTTP auth, and since you can't read the session state directly, you can't reliably tell when it has changed.
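For illustration, here is the flow sketched in Python (the same steps map directly onto libcurl plus a regex library in C++; the URL and field names are placeholders):

import re
import requests
from urllib.parse import urljoin

# A Session keeps cookies across requests, which is what makes the
# login "stick", just as a browser would.
session = requests.Session()

login_url = 'https://example.com/login'   # placeholder
page = session.get(login_url).text

# Pull the form action out of the page. A single regex is fragile;
# real sites may need a more robust parse.
match = re.search(r'<form[^>]+action="([^"]+)"', page)
action = urljoin(login_url, match.group(1))

# POST the credentials; redirects (301/302) are followed automatically.
response = session.post(action, data={'username': 'me', 'password': 'secret'})

# As noted above, "am I logged in?" is necessarily a per-site check.
print('logged in' if 'Logout' in response.text else 'probably not')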
That's the most lightweight solution I can come up with right now.