Strongloop / Loopback Getting started missing root.js - loopbackjs

I'm following the LoopBack getting started documentation and I have an issue when I want to add static files (StrongLoop getting started, Step 3): the file /server/boot/root.js doesn't exist, and /server/server.js does not have the two lines that were supposed to be there:
// var path = require('path');
// app.use(loopback.static(path.resolve(__dirname, '../client')));
Instead, /server/middleware.json shows:
"routes": {
"loopback#status": {
"paths": "/"
}
},
Could someone please let me know how to perform this step? Note: the git repository for Step 3 is correct, but not the project scaffolded by running slc loopback.

The /server/middleware.json file is where middleware is registered now. The following excerpt resolves to a file in the loopback module's server/middleware directory (loopback-getting-started/node_modules/loopback/server/middleware):
"routes": {
"loopback#status": {
"paths": "/"
}
},
Change this to:
"routes": {
},
Restarting the LoopBack server and visiting localhost:3000 now results in an Express 404 error, which is expected since you no longer have a route defined for /.
You now need to specify in the middleware.json file how to serve static content. You do this in the "files" phase:
"files": {
"loopback#static": {
"params": "$!../client"
}
}
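Putting the two edits together, the affected phases of /server/middleware.json end up looking roughly like this (other phases, and any other entries within them, are omitted for brevity):
{
  "routes": {
  },
  "files": {
    "loopback#static": {
      "params": "$!../client"
    }
  }
}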
You can now add the following to an index.html file in your /client directory as the original documentation states.
<head><title>LoopBack</title></head>
<body>
<h1>LoopBack Rocks!</h1>
<p>Hello World... </p>
</body>
Restarting the LoopBack server and visiting localhost:3000 now serves the index.html page.
More details about the new way to specify middleware are available at http://docs.strongloop.com/display/public/LB/Defining+middleware#Definingmiddleware-Registeringmiddlewareinmiddleware.json
Also see https://github.com/strongloop/loopback-faq-middleware

The latest version of LoopBack removed the root.js file. You do not need it anymore; the docs need to be updated to reflect this.

Related

In Wagtail, how do I set up the v2 API to attach the full base URL (e.g. http://localhost:8000) to streamfield images and richtext embeds?

I am currently using Nuxt.js as a frontend for my Wagtail headless CMS. When I load a richtext field or an image block within a streamfield, I am unable (or don't know how) to attach the full base URL of the Wagtail server. When it gets rendered on the Nuxt.js site it resolves to src="/media/images/image.png", which ends up looking for the image on the Nuxt.js site (http://localhost:3000) when it actually needs to be found on the Wagtail server (http://localhost:8000). For standard images I can intercept and prepend the server base URL, but not for anything inside a streamfield.
[EDIT: Better answer below]
I'm not 100% certain this is the "proper" way to do it, but I managed to get what I needed by adding a server middleware that detects any request starting with /media and prepends the server base URL.
// in nuxt.config.js
export default {
  serverMiddleware: [
    '~/serverMiddleware/redirects'
  ],
}
Then, in serverMiddleware/redirects.js:
export default function(req, res, next) {
  if (req.url.startsWith('/media')) {
    res.writeHead(301, { Location: `http://localhost:8000${req.url}` })
    res.end()
  } else {
    next()
  }
}
This is a quick workaround for now; I'll see if there is anything better.
Ok, I believe this is the proper solution. It just seemed to evade me :P
Instead of using a redirect, simply add a proxy to nuxt.config.js:
modules: [
  '@nuxtjs/axios',
],
axios: { proxy: true },
proxy: {
  '/api/v2/': 'http://localhost:8000',
  '/media/': 'http://localhost:8000'
}
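With the proxy in place, the Nuxt app can request relative URLs and have them forwarded to the Wagtail server. A minimal sketch of a page component using @nuxtjs/axios follows; the endpoint path and field names are illustrative, not taken from the original post:
// pages/blog/_slug.vue (script block) -- illustrative sketch
export default {
  async asyncData({ $axios, params }) {
    // Relative URL: the proxy forwards /api/v2/... to http://localhost:8000
    const data = await $axios.$get('/api/v2/pages/', {
      params: { type: 'blog.BlogPage', slug: params.slug, fields: 'body' }
    })
    return { page: data.items[0] }
  }
}
Media URLs embedded in the returned richtext stay relative (/media/...), which is exactly what the '/media/' proxy entry above takes care of.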

next.js export static - S3 - routing fails on page reload

I'm deploying a Next.js app as a static export to an S3 bucket configured for static website hosting.
I use Next's build and export commands to generate the out/ directory and then copy that into my S3 bucket.
The bucket then contains some files; for simplicity let's say there's just index.html and about.html.
The problem: when a user hits index.html via www.website.com and then navigates to www.website.com/about, everything works, but reloading www.website.com/about of course fails.
Visiting www.website.com/about.html does find the correct asset to render the site, however.
Is there a way to export a static Next.js app, host it on S3, and have requests to /about serve /about.html?
As always, thanks for looking, and thanks even more for participating.
In your next.config.js file at the root of the project, add:
module.exports = {
  trailingSlash: true,
}
Now you can create your pages (e.g. about.jsx) inside the pages directory, and when you run the export command Next will create a folder named after each page with an index.html file inside it.
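For example (purely an illustration, assuming pages/index.jsx and pages/about.jsx), the exported out/ directory would look roughly like this, so S3 can resolve www.website.com/about/ to about/index.html:
out/
  index.html
  _next/          (static assets)
  about/
    index.html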
Give it a try, worked fine here.
The best solution I've arrived at so far, inspired by this gist:
https://gist.github.com/rbalicki2/30e8ee5fb5bc2018923a06c5ea5e3ea5
Basically, when deploying the build to the S3 bucket, you can simply rename the .html files to have no .html suffix, e.g. www.bucket.com/about.html -> www.bucket.com/about, and then both SSR and CSR routing work as expected.
The resulting files are served with Content-Type: text/html despite not having the suffix; I don't know if this is problematic or not.
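A minimal Node.js sketch of that deploy step, assuming the AWS SDK v2 and a bucket named my-bucket (both the bucket name and the script itself are assumptions, not taken from the gist):
// deploy.js -- upload the exported pages without the .html suffix, setting
// Content-Type explicitly since S3 can no longer infer it from the extension.
const fs = require('fs');
const path = require('path');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();
const outDir = path.join(__dirname, 'out');

fs.readdirSync(outDir)
  .filter((file) => file.endsWith('.html'))
  .forEach((file) => {
    // Keep index.html as-is; rename about.html -> about, etc.
    const key = file === 'index.html' ? file : file.replace(/\.html$/, '');
    s3.putObject({
      Bucket: 'my-bucket',
      Key: key,
      Body: fs.readFileSync(path.join(outDir, file)),
      ContentType: 'text/html',
    })
      .promise()
      .then(() => console.log(`uploaded ${key}`));
  });
Nested routes would need a recursive walk, but this covers the flat index.html / about.html case from the question.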
RewriteRule ^([^.]+)$ $1.html [NC,L]
worked fine (y)
Here's how Guy Hudash did it:
Assuming you already have a react.js website hosted on s3, you need to first change the S3 Routing Rules in S3 bucket -> Properties -> Static website hosting -> Redirection rules.
The new s3 console changes the routing rules format to JSON, so you’ll need to add this (don’t forget to replace myhost.com):
[
  {
    "Condition": {
      "HttpErrorCodeReturnedEquals": "404"
    },
    "Redirect": {
      "HostName": "myhost.com",
      "ReplaceKeyPrefixWith": "#!/"
    }
  },
  {
    "Condition": {
      "HttpErrorCodeReturnedEquals": "403"
    },
    "Redirect": {
      "HostName": "myhost.com",
      "ReplaceKeyPrefixWith": "#!/"
    }
  }
]
These rules add #!/ as the prefix to the URL in the case of a 403 or 404 error. This solves the react-router issue and now it will load the right page.
Now we would like to remove this prefix from the URL for a cleaner solution. You'll need to edit your _app.js and add:
import { useRouter } from 'next/router';

const MyApp = ({ Component, pageProps }) => {
  const router = useRouter();
  const path = (/#!(\/.*)$/.exec(router.asPath) || [])[1];
  if (path) {
    router.replace(path);
  }
  return (
    <Component {...pageProps} />
  );
};

export default MyApp;
You may refer to the blog post itself, or, if you are using a routing solution other than Next.js for your application or are looking for a more detailed explanation, see the Mark Biek blog.

Update Workbox controlled pre-cached items in "installed" (added to home screen) PWA

I am using Workbox 2 to deal with "offline" behavior of my PWA. The content is produced by HexoJS and is deployed to GitHub Pages. Here is the workbox-cli-config.js for reference:
module.exports = {
  globDirectory: "./public/",
  globPatterns: ["**/*.html", "**/*.js", "**/*.css", "**/*.png"],
  swDest: "public/sw.js",
  clientsClaim: true,
  skipWaiting: true,
  runtimeCaching: [
    {
      urlPattern: /^https:\/\/use\.fontawesome\.com\/releases/,
      handler: "networkFirst"
    },
    {
      urlPattern: /^https:\/\/fonts\.gstatic\.com\/s\//,
      handler: "networkFirst"
    },
    {
      urlPattern: /^https:\/\/maxcdn\.bootstrapcdn\.com\/bootstrap/,
      handler: "networkFirst"
    },
    {
      urlPattern: /^https:\/\/fonts\.googleapis\.com/,
      handler: "networkFirst"
    },
    {
      urlPattern: /^https:\/\/code\.jquery\.com\/jquery-3/,
      handler: "networkFirst"
    }
  ]
};
Everything works as expected and the app properly handles switching to offline mode in Chrome DevTools.
The problem shows up when I update some static content, say HTML, and re-deploy it to GitHub Pages: I see the updated version of the content, but not always, and not in all browsers.
I always have to use the "Clear browsing data" action in Opera or Chrome (or other browsers) to refresh the appearance of the page, because a simple refresh/reload doesn't help.
The problem gets even worse with the "Added to home screen" PWA. I cannot force a content refresh even by uninstalling and reinstalling. Only wiping browsing data in the Android Chrome browser refreshes the app content.
My questions are:
Is it possible at all to have pre-cached static assets automatically updated when I re-visit the page or refresh the installed PWA?
Am I configuring Workbox in a wrong way (see my workbox-cli-config.js above)?
Will migration to Workbox 3 make any difference?
I'd be glad to share other config files if this will help to resolve the problem.
PS: The page has a score of 100 in Lighthouse for all criteria except performance (because of render-blocking content from bootstrap.min.js, but I read on SO that this is OK).

How to add authorization header in POSTMAN environment?

I'm testing a bunch of API calls using Postman. Instead of adding an authorization header to each request, can I make it part of the Postman environment, so I don't have to pass it with every request?
Yes, you can do this in Postman by using an environment variable, let's say authorization, as the header value, and then setting that variable's value in the active environment.
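If it helps to see the shape of the data, a header that references the environment variable looks like this in a collection export (v2.1 format); the variable name authorization is just an example:
{
  "key": "Authorization",
  "value": "{{authorization}}",
  "description": "Auth token pulled from the active environment"
}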
In contemporary releases of Postman, you can just set your auth on the collection (or folder), and have every request inherit it (which I believe new requests do by default).
Postman usually remembers the key-value pairs you send in headers, so there is no need to add headers to each request. Anyway, you can configure a header "Preset" with your auth token.
Not sure if this is what you're looking for, but we use a link-based API that requires auth headers on each request. If you go to Postman > Preferences > General and enable Retain headers when clicking on links, Postman will pass through your auth headers to the child links.
Hope that helps!
If you can't wait, here is a workaround I just made:
1. Export your collection (data format v2.1).
2. Open Firefox, dev tools, Scratchpad.
3. Paste the code below.
4. Replace the header information with your header.
5. Replace the var a with the contents of your exported .json file.
6. Run the script.
7. The copy(b) command will put the new data in your clipboard.
8. In Postman, click Import > Paste Raw Text > Import > as a copy.
9. Verify your requests have your header, and run it :)
// Header to add to every request; the value references a Postman environment variable.
var myHeader = {
  "key": "X-Client-DN",
  "value": "{{Postman-DN}}",
  "description": "The User's DN Interacting with the system."
};

// Recursively walk the exported collection and push the header onto every request.
function addHeader(obj, header) {
  if (obj.hasOwnProperty('request')) {
    obj.request.header.push(header);
  }
  if (obj.hasOwnProperty('item')) {
    obj.item.forEach(function(element) {
      element = addHeader(element, header);
    });
  }
  return obj;
}

// Placeholder: replace this with the contents of your exported collection .json file.
var a = {
  "item": [{}, {
    "request": {
      "header": []
    }
  }, {
    "item": [{
      "request": {
        "header": []
      }
    }]
  }]
};

var b = addHeader(a, myHeader);
console.log(JSON.stringify(b, null, 2));
// Might have to run copy manually in the console
//copy(b);

Ember + Firebase, locationtype = history

I am hosting my Ember app with Firebase and would like to use locationType=history (no hash in the URL), as I am also using fullpage.js, which uses the #.
So my question is: can I configure Firebase to only listen for the base URL?
What you are looking for are so-called rewrites.
From the Firebase documentation for rewrites:
Use a rewrite when you want to show the same content for multiple URLs.
URL rewrites can be specified by defining a rewrites section in the firebase.json file:
"rewrites": [ {
"source": "**",
"destination": "/index.html"
} ]
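In context, a minimal firebase.json for an Ember build might look roughly like this (the public directory dist is an assumption based on Ember's default build output, not something from the original answer):
{
  "hosting": {
    "public": "dist",
    "rewrites": [
      {
        "source": "**",
        "destination": "/index.html"
      }
    ]
  }
}
With this in place, any deep link is rewritten to index.html and Ember's history-based router takes over on the client.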
Be sure to read the docs, because there is a lot more that you can do with these rules.