Can you use LottieFiles / bodymovin on a GitHub Pages account?

Bodymovin / LottieFiles requires that you have your After Effects animation exported as JSON and stored on your server.
Because GitHub Pages is a static site with no backend, does this mean that you cannot use LottieFiles with GitHub Pages?
If you can, does anyone have an example of this being done, please?
I have used bodymovin with a JSON After Effects file to run an animation on other sites, just not GitHub Pages.

does this mean that you cannot use LottieFiles with GitHub Pages?
Not quite. GitHub itself (a README or any Markdown rendered on github.com) will not interpret the lottie file, even when declared through the web player's <lottie-player> element, so the animation will not play there. One alternative/workaround in that context is to embed a video in your GitHub page showing what the animation would look like.
It is possible with Jekyll pages, though: GitHub Pages serves your HTML and JavaScript as-is, and the Lottie web player runs entirely client-side, so no backend is required.
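For example, a minimal HTML page in your GitHub Pages repository can load the web player from a CDN and point it at a JSON animation committed to the repo. A sketch (the asset path is a placeholder, not from the question):

<!-- load the Lottie web player; /assets/animation.json is a placeholder path -->
<script src="https://unpkg.com/@lottiefiles/lottie-player@latest/dist/lottie-player.js"></script>
<lottie-player src="/assets/animation.json" background="transparent" speed="1" style="width: 300px; height: 300px" loop autoplay></lottie-player>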

Embedding a functional website inside a Squarespace webpage

First of all, thank you for everything that you do. Without this community, I would hate web design and be reliant on my teacher's outdated, static methods. Much love <3
So, this is a tricky one (maybe).
I want to have, essentially, an iframe on a webpage that contains a website I coded previously. It was a project for school that never went live, but I'd like to include it as part of my portfolio. Problem is, an iframe needs a URL for a source, but I just have the folder with more folders full of code, fonts, and images. How can I tell the browser to populate this box with everything from "name" folder? And then how will it know to run the code instead of just showing a file tree or something?
In the end, I want a page describing a previous web project and let the client experience that project within the one page. And I don't want to get a domain for every project I do.
Maybe there's an easier way I'm not thinking of?
To make it interesting, my new portfolio site is being made in Squarespace...maybe. I bought a domain from them because I had a promo code and wanted to try the platform, but I kind of hate it. I can't change any of the code and it won't maintain a connection to Typekit. So all I can do is change the basic appearance of preexisting elements. It's like WordPress all over again....LAME! Sadly, I already bought the domain.
Can Squarespace just be a host? Is there a way to download the raw code of these templates, edit it, and upload it again?
Thanks for all your help!
I want to have, essentially, an iframe on a webpage that contains a website I coded previously.
Squarespace's file upload mechanism is very limited. Without using the Developers Platform, there is no effective way to upload many files at once. Furthermore, there is no way to create folders. Therefore, even if you were willing to upload each .html file and each asset one-by-one, there'd be no way to organize the files into folders (assuming that the "tree" you mentioned includes additional sub-folders).
Initially, in order to get the files to be accessible by Squarespace, you'd have to do one of the following:
1. Use Squarespace Developers Platform (A.K.A. "Developer Mode") and upload your to-be-iframed (TBI) website files to the "assets" folder using SFTP or Git.
2. Host your TBI website files somewhere else (a different host environment, for example) which will maintain your file/folder structure.
How can I tell the browser to populate this box with everything from "name" folder? And then how will it know to run the code instead of just showing a file tree or something?
Assuming that the TBI website has an index.html file or home.html file or similar, and assuming you were to use the Squarespace Developer Platform, you'd insert the iframe either in a Code Block or within a template/.region file directly using something like
<iframe src="/assets/tbiwebsitefolder/index.html"></iframe>
while setting your other iframe attributes (such as height and width) as needed.
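For instance (the dimensions here are arbitrary):

<iframe src="/assets/tbiwebsitefolder/index.html" width="100%" height="600" style="border: none;"></iframe>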
Is there a way to download the raw code of these templates, edit it, and upload it again?
Yes. You select a template and then enable Developer Mode on that template. From there, you use SFTP or Git to download the template files, edit, and reupload.
You may benefit by reviewing some considerations of enabling Developer Mode on a Squarespace Template.
One other idea, to avoid the iframe and Developer Mode entirely, would be to capture images of the TBI website rendered in a browser, and then simply add those images to a gallery block or gallery page. This could allow you to convey the general idea of the project but would of course not capture the full "experience" of it.

Customizing sso_redirect.html in WSO2 IS 5.0

I'm trying to customize the sso_redirect.html page in WSO2 IS 5.0 SP1, located at IS_HOME\repository\resources\security\sso_redirect.html.
Though any JavaScript or inline CSS changes are reflected in this page, references to images are not honored; e.g., the <img> tag doesn't fetch the image on the page. Is there any limitation on this front?
Thanks in advance.
Cijoy
This page can be customized by defining all resource references (style sheets, images, JavaScript, etc.) as absolute URLs instead of relative paths. Those resources will then be available at page load time.
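For example, in sso_redirect.html you might change a relative image reference to an absolute URL (the host and path below are placeholders):

<!-- relative path: not resolved when the redirect page is served -->
<img src="images/logo.png">

<!-- absolute URL: fetched normally at page load -->
<img src="https://your-server.example.com/static/images/logo.png">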

Ember hash URLs in Google

I am concerned about page ranking on Google in the following situation:
I am looking to convert my existing site, with 150k+ unique page results, to an Ember app. Currently the URLs are something like domain.com/model/id; with Ember and hash change they will become /#/model/id. I really want the History API, but the lack of IE support doesn't leave that as an option. My sitemap for Google has lots and lots of great results using the old model/id scheme. On the Rails side I will test the browser for compatibility before rendering either the JS-rich app or the plain HTML/CSS. Does anyone have good SEO suggestions for success with my current schema?
Linked below is my schema and looking at the options -
http://static.allplaces.net/images/EmberTF.pdf
History state is awesome but it looks like support is only around 60% of browsers.
http://caniuse.com/history
Thanks guys for the suggestions; the Google guide is similar to what I'm going to try. I will roll it out to one client this month and see what Webmaster Tools and Analytics show.
Here is everything you need to have your hash links be SEO-friendly: https://developers.google.com/webmasters/ajax-crawling/
Basically, you write your whole app with hash links, but you have to add "!" to them, so you have #!/model/id. Next, you must have all pages pre-generated somewhere, and if Google asks for them, return "plain HTML" as described here (see the sketch below): https://developers.google.com/webmasters/ajax-crawling/docs/getting-started
Use Google Webmaster Tools to check whether your site is crawlable.
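As a rough illustration of how that scheme works (this sketch is not from the guide; renderSnapshot is a hypothetical helper, and the questioner's backend is Rails, but the mapping is the same): the crawler rewrites domain.com/#!/model/id into domain.com/?_escaped_fragment_=/model/id, and your server answers that variant with the pre-generated plain HTML.

// hypothetical Node/Express handler illustrating the _escaped_fragment_ mapping
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  const fragment = req.query._escaped_fragment_; // e.g. "/model/id"
  if (fragment !== undefined) {
    // crawler request: serve a pre-generated, plain-HTML snapshot
    res.send(renderSnapshot(fragment)); // renderSnapshot is a placeholder
  } else {
    // normal browser request: serve the JS-driven Ember app shell
    res.sendFile(__dirname + '/index.html');
  }
});

app.listen(3000);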
I'm not sure if you're aware that you can configure Ember to use the browser history for the location API and keep your URLs the way they are referenced now. All you need to do is configure the Router's location property:
App.Router.reopen({
  location: 'history'
});
See more details about specifying the location API here.

How does Google create the Instant Preview images?

I was wondering how Google is capturing all those websites that are featured in Google's Instant Previews? I'm sure they are not using a thumbnail service (like www.thumbalizr.com, websnapr.com, snapcasa.com, thumbshots.com) but rather their own software. BUT: given that Google captures A LOT of websites, they must have a very sophisticated system. PLUS: this generates HUGE amounts of data (JPEGs?).
Does somebody have more insight into how Google does this?
Yes, it's something like that. Their webmaster pages hint that they render the page with the same engine Chrome uses, and the preview is based on the result.
It's hard to say, but here's some info from a Google project manager discussing it:
http://googleblog.blogspot.com/2010/11/beyond-instant-results-instant-previews.html
It says in part:
"we match your query with an index of the entire web, identify the
relevant parts of each webpage, stitch them together and serve the
resulting preview completely customized to your search—usually in
under one-tenth of a second"
That, plus looking at the source of a preview page, suggests that they're using their own index (the same webcache.googleusercontent.com that serves the Cached pages) to serve Base64-encoded JPEG strings as screenshots.

How to make website available offline

I want to make my website available offline, even if the user clears the cache and cookies. Is it possible? Also, I am dealing with a database. Is it possible to handle databases offline?
A user could store a local copy of a single webpage using Chrome (right-click > Save As) and it will store all resources (images, CSS, JS) required to fully load the page offline. Other browsers have similar options.
You can use wget to mirror a whole website for offline browsing (--mirror recurses the whole site, --convert-links rewrites links to work locally, --html-extension saves pages with an .html extension, -p fetches page requisites such as images and CSS):
wget --mirror --convert-links --html-extension -p http://www.example.com/
Of course, neither of these options will handle database-driven elements of your site/page.
If you want to mock a database or dynamic elements of a page offline then Google Gears is probably the closest to what you are looking for but I think it was deprecated by Google last year.
If your users have modern browsers, try HTML5 Application Cache.
References:
Overview - http://www.html5rocks.com/en/features/offline
Demo - https://jonathanstark.com/labs/app-cache-7/
Tutorial - https://www.html5rocks.com/en/tutorials/appcache/beginner/
Article - http://grinninggecko.com/developing-cross-platform-html5-offline-app-1/
Summary: Click me, I'm the newish thing that browsers now support!
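For reference, a minimal Application Cache setup looked something like this (the file names are illustrative, and note that this API has itself since been deprecated in favor of service workers):

<!-- index.html: point the html element at a cache manifest -->
<html manifest="offline.appcache">

CACHE MANIFEST
# offline.appcache: files the browser should store for offline use
index.html
styles.css
app.js

NETWORK:
*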
I clicked some of the links found in other answers, and all tools mentioned are deprecated or will/should be soon.
Later, when I wasn't connected to the internet, I opened a site operated by Google (either Google Docs or YouTube, I sadly forget which) and viewed the page source, as I was curious to see the other answers in action. I found something called ORIGIN-TRIAL in the manifest file.
After a quick Google search, I found this, which brought me to this, which somehow brought me to the last link:
https://developers.google.com/web/fundamentals/primers/service-workers
In conclusion, use Service Workers now. If you're wondering whether they work in all browsers, don't worry: all popular browsers should support them, as seen here.
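To illustrate, here is a minimal service worker sketch (the file names, cache name, and cached file list are placeholders, not from the answer above):

// in your page's script: register the service worker (sw.js is a placeholder name)
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// sw.js: cache core files on install, then serve them cache-first
const CACHE = 'offline-v1'; // placeholder cache name

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) =>
      cache.addAll(['/', '/index.html', '/styles.css', '/app.js'])
    )
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});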
No. If your databases are housed online, then you need an internet connection for the PHP/ASP (whatever you're using to talk to the databases) to connect/communicate with them.
For storing data locally and accessing it offline, take a look at Gears and Web Storage.
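As a quick sketch of Web Storage (the key and data here are made up): localStorage keeps simple key/value data across sessions, so it can be read back while offline.

// save data locally; it persists across browser sessions
localStorage.setItem('draft', JSON.stringify({ title: 'Hello', body: 'Offline note' }));

// later, even without a connection, read it back
const draft = JSON.parse(localStorage.getItem('draft') || 'null');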
The main problem is what degree of functionality you want to provide with your website. It always requires some work on the client (user) side to "store", i.e. save, your website offline. You would have to put all your functionality in one page that the user stores (be it a Flash movie or some JavaScript code).
You can use a simple command to download a whole website locally with all links working properly (-r downloads recursively, -k converts links to point at the local copies):
wget -rk 'http://www.website.com'
For an HTTPS URL you need to add one more option, as below (--no-check-certificate skips certificate validation):
wget -rk --no-check-certificate 'https://www.website.com'