Sitecore Mixed Content error for workflow

We have added a sample workflow for our current site. Some of our environments use an HTTPS connection. When we tried to use the workflow over HTTPS, we got this error:
Mixed Content: The page at 'https://site/sitecore/shell/default.aspx?sc_lang=en' was loaded over HTTPS, but requested an insecure resource 'http://site//sitecore/shell/default.aspx?xmlcontrol=Workbox&mo=preview&reload=1&{190B1C84-F1BE-47ED-AA41-F42193D9C8FC}=0&{46DA5376-10DC-4B66-B464-AFDAA29DE84F}=0&{FCA998C5-0CC3-4F91-94D8-0A4E6CAECE88}=0'. This request has been blocked; the content must be served over HTTPS.
I am using Sitecore 8.1 Update 2.
You can see three IDs there:
190B1C84-F1BE-47ED-AA41-F42193D9C8FC: Draft state ID for the sample workflow
46DA5376-10DC-4B66-B464-AFDAA29DE84F: Awaiting Approval state ID for the sample workflow
FCA998C5-0CC3-4F91-94D8-0A4E6CAECE88: Approved state ID for the sample workflow
The problem seems to occur on the workflow reload.
Is there a setting in Sitecore workflows that I can set so the workflow uses HTTPS URLs?

Related

Problem handling cookies for Blazor Server using OpenID server (Keycloak)

I have a baffling issue with cookie handling in a Blazor Server app (.NET Core 6) using OpenID (Keycloak). Actually, more than a couple of issues, which may or may not be linked. It's a typical (?) reverse proxy architecture:
A central nginx receives queries for services like Jenkins, JupyterHub, SonarQube, Discourse etc. These are mapped through aliases to internal IPs where the nginx can access them. This nginx intercepts URLs like: https://hub.domain.eu
A second reverse proxy resolves https://dsc.domain.eu and forwards requests to a Blazor app running in Kestrel on port 5001. Both Kestrel and nginx run under SSL, which is required to get the websockets working.
Some required background: the Blazor app is essentially a 'hub' whose various Razor pages host the above-mentioned services in iframes. How it works: when the user asks for the root path (https://hub.domain.eu), it opens the root page of the Blazor app (/).
The nav menu contains the links to Razor pages which contain the iframes for those services. For example:
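Something like the following (illustrative markup, not my exact code): a NavLink whose relative href points at a Razor page hosting the service in an iframe.

<!-- NavMenu.razor (illustrative) -->
<NavLink class="nav-link" href="jenkins">Jenkins</NavLink>

<!-- Pages/JenkinsPage.razor (illustrative) -->
@page "/jenkins"
<iframe src="/jenkins" style="width:100%; height:100vh; border:0;"></iframe>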
The relative path is intercepted by the ‘central’ nginx which loads Jenkins. Everything is under the same Keycloak OpenID server. Note that everything works fine without the Blazor app.
Scenarios that cause the same problem
Assume the user logs in to my app using the login page of Keycloak (NOT the REST API) through redirection. Then he proceeds to a link and he is indeed logged in there as well. The controls in the app change accordingly to indicate that the user is indeed authenticated. If you close the tab and open a new one, the Blazor app will act as if it's not logged in, while the other services (e.g. Jenkins) will show the logged-in user from before. When you press the Login link, you'll be greeted with a 502 nginx error. If you clear the cookies from the browser (or in private / stealth mode), everything works again. Or if you just log off, e.g. from Jenkins.
Assume that the user is now in a service such as Jenkins, SonarQube, etc. If you press F5 now, you have two problems: you get a 404 error, but only on SOME services such as SonarQube and not on others. This is a side problem for another post. The thing is that the Blazor app again appears not logged in after pressing Back / Refresh.
The critical part of Program.cs looks like the following:
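In outline, it's the standard cookie + OpenID Connect wiring; a minimal sketch rather than my exact code (realm, client and option values are illustrative):

// Program.cs sketch: Blazor Server behind a reverse proxy, Keycloak as OIDC provider
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.AspNetCore.HttpOverrides;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddRazorPages();
builder.Services.AddServerSideBlazor();
builder.Services.AddControllers();

builder.Services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie()
.AddOpenIdConnect(options =>
{
    // Placeholder realm/client values for the Keycloak server.
    options.Authority = "https://keycloak.domain.eu/realms/myrealm";
    options.ClientId = "dsc-hub";
    options.ClientSecret = builder.Configuration["Oidc:ClientSecret"];
    options.ResponseType = "code";
    options.SaveTokens = false;              // see side note below about header size
    options.GetClaimsFromUserInfoEndpoint = true;
});

var app = builder.Build();

// Honour the X-Forwarded-* headers set by nginx so redirect URIs use https.
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});

app.UseStaticFiles();
app.UseRouting();
app.UseAuthentication();
app.UseAuthorization();
app.MapControllers();        // for the login / logoff endpoints below
app.MapBlazorHub();
app.MapFallbackToPage("/_Host");
app.Run();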
This class handles the login / logoff:
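Again a sketch rather than my exact class: endpoints that trigger the OIDC challenge (login) and sign out of both the local cookie and the Keycloak session.

using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.AspNetCore.Mvc;

public class AccountController : Controller
{
    [HttpGet("login")]
    public IActionResult Login(string redirectUri = "/") =>
        // Redirects the browser to Keycloak's login page.
        Challenge(new AuthenticationProperties { RedirectUri = redirectUri },
                  OpenIdConnectDefaults.AuthenticationScheme);

    [HttpGet("logout")]
    public IActionResult Logout() =>
        // Clears the local auth cookie and ends the Keycloak session.
        SignOut(new AuthenticationProperties { RedirectUri = "/" },
                CookieAuthenticationDefaults.AuthenticationScheme,
                OpenIdConnectDefaults.AuthenticationScheme);
}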
Side notes:
SaveTokens = false still causes large header errors and results in an empty token (shown in the code above by the warning 'Token received was null'). I'm still able to obtain user details from httpContext, though.
No errors show up in the reverse proxy's error.log or in Kestrel (all deployed on Linux).
MOST important: if I copy-paste the failed login link (the one that produced the 502 error) into a "clean" browser, it works fine.
There are lots of properties affecting OpenID Connect, and it could also be an nginx issue, but I've run out of ideas over the last five days. The nginx config has been adjusted for large headers and websockets.
Any clues as to where I should at least focus my research to track down the error?
The 502 error indicates a problem on nginx's side. The inner reverse proxy had a proper configuration but, as it turned out, the front one did not. Once we set the header buffer size to the suggested value, everything worked.
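For reference, a minimal sketch of the front nginx settings involved (values are illustrative; tune them to your header sizes):

# Front (public-facing) nginx; the inner proxy already had equivalents.
# Keycloak's large cookies/headers otherwise trigger the 502.
server {
    listen 443 ssl;
    server_name hub.domain.eu;

    large_client_header_buffers 8 32k;    # accept big request headers/cookies

    location / {
        proxy_pass https://dsc.domain.eu;
        proxy_buffer_size       32k;      # fit large upstream response headers
        proxy_buffers           8 32k;
        proxy_busy_buffers_size 64k;

        # websocket support required by Blazor Server
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}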

Why is the website not auditable or readable by Google?

I have created a staging website on Heroku and a production one on AWS, but whenever I try to audit or speed-test them, I get the attached error.
http://findmy-web.herokuapp.com/ staging -> hosted on Heroku
https://www.gocatchy.com/ live -> hosted on AWS
Technology used: Node and React
Folks, the issue is resolved.
When I checked the server-side config, the context URL was responding with status code 400:
res.status(400).send(fullMarkup);
I have updated it to 200 and it works!
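A minimal sketch of the change, assuming an Express SSR handler (renderApp is an illustrative stand-in for whatever produces fullMarkup):

const express = require('express');
const app = express();

// Serve the server-rendered markup for all routes.
app.get('*', (req, res) => {
  const fullMarkup = renderApp(req.url); // renderApp: hypothetical SSR step

  // Lighthouse/PageSpeed and crawlers treat a 400 as an error page,
  // so the site looked unreadable to Google. Serve the markup with 200.
  res.status(200).send(fullMarkup);
});

app.listen(process.env.PORT || 3000);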

<cfajaxproxy> Locked-Down production Lucee

I'm setting up a production Lucee box and having issues locating the ajax library on the Lucee server. My browser is unable to find the ajax library and shows a 404 error.
I am not sure whether this is because of a firewall or a Lucee server configuration issue.
My development and staging servers work fine; I only have this issue on the production server.
Request URL: https://example.com/mapping-tag/lucee/core/ajax/JSLoader.cfc?method=get&lib=LuceeAjax
Request Method: GET
Status Code: 404
Remote Address: 201.10.26.29:443
Referrer Policy: no-referrer-when-downgrade
Please advise.
With an Adobe CF server, the JS files related to cfajaxproxy are in the /CFIDE/scripts/ folder. The /CFIDE/ folder is removed from public access when the server is locked down. To allow access to the JS files for the UI and ajax tags, you can specify an alias in CF Admin for that folder.
For example, /cfjs would map to /CFIDE/scripts in CF Admin, so CF will generate that path for cfajaxproxy to use. You'd have to create this folder alias in IIS or whatever web server you're using.
If, on Lucee, the folder /lucee/core/ is blocked when locked down, there should be a similar solution for that engine; a sketch of one option follows.
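Assuming Lucee runs on Tomcat behind nginx at 127.0.0.1:8888 (paths and port are assumptions; adjust to your setup):

# Re-expose only the ajax loader path that cfajaxproxy needs, while the
# rest of /lucee/ stays blocked by the lockdown rules.
location ~ ^/(?:mapping-tag/)?lucee/core/ajax/ {
    proxy_pass http://127.0.0.1:8888;   # JSLoader.cfc must reach the CFML engine
}

Note that since JSLoader.cfc is invoked with method=get, it has to be processed by the engine rather than served as a static file.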

Updating beans marked as @RefreshScope

Here is my scenario:
My microservice gets notified about changes to its configuration from the central conf server. It can be a partial update or a full one.
I use the @RefreshScope annotation on the relevant beans. The question is how to update the marked beans, i.e. reload them.
Just to clarify: from Spring Cloud I use only RefreshScope.
Any ideas?
Add the dependency org.springframework.boot:spring-boot-starter-actuator to your project.
Refresh the configuration by calling the refresh endpoint.
For example, if you configure your management endpoint like below, curl -X POST http://localhost:8001/manage/refresh will trigger a refresh of the changed configuration.
management:
  context-path: /manage
  port: 8001
  security.enabled: false
If you have different components that are affected by changes, it's good to keep your configurations in a repository; you can then add a publish-subscribe model for refreshing the context, in which all the affected components subscribe to an event published by your repository as a result of a configuration change.
And for refreshing the context we have two options:
Hit the refresh endpoint of your app with a POST request.
Get the RefreshEndpoint bean by autowiring it, then call refreshEndpoint.refresh(). This will refresh the context at runtime (see the sketch after this list).
In both solutions, mark the beans of interest with @RefreshScope.
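A minimal sketch of option 2, with an illustrative @RefreshScope bean (the property name features.beta-enabled is just an example):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.cloud.endpoint.RefreshEndpoint;
import org.springframework.stereotype.Component;

// A bean whose @Value field is re-bound when the context is refreshed.
@RefreshScope
@Component
class FeatureToggles {
    @Value("${features.beta-enabled:false}")
    private boolean betaEnabled;

    public boolean isBetaEnabled() { return betaEnabled; }
}

// Triggers the refresh programmatically when the conf server notifies us.
@Component
class ConfigChangeListener {
    @Autowired
    private RefreshEndpoint refreshEndpoint;

    public void onConfigurationChanged() {
        refreshEndpoint.refresh(); // re-binds all @RefreshScope beans
    }
}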

How to access Amazon images with https (AWSECommerceService)

For each product on my website I have a page that promotes a few books from Amazon. I get the books using a query to AWSECommerceService from my web server. The XML I receive from Amazon contains a list of books with information such as title, price, image URL, etc. I use that info to generate my website's pages.
The image URLs provided by Amazon are all HTTP, while I need to publish them over HTTPS in order to avoid browser warnings for page visitors. Just replacing HTTP with HTTPS doesn't work.
Example:
http://ecx.images-amazon.com/images/I/51tD0SDNMeL.SX166.jpg => OK
https://ecx.images-amazon.com/images/I/51tD0SDNMeL.SX166.jpg => ERR_CERT_COMMON_NAME_INVALID
Any suggestions?
I just found out that the same images can be accessed via HTTPS on a different amazon.com sub-domain:
Replacing 'http://ecx.images-amazon.com' with 'https://images-na.ssl-images-amazon.com' will generate a perfectly working URL.
The image in the example in my question can be successfully accessed via HTTPS at the following URL:
https://images-na.ssl-images-amazon.com/images/I/51tD0SDNMeL.SX166.jpg
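A minimal sketch of that substitution, in JavaScript just for illustration (toHttpsImageUrl is an illustrative helper name):

// Rewrite the image host returned by AWSECommerceService to the HTTPS-capable one.
function toHttpsImageUrl(url) {
  return url.replace(
    'http://ecx.images-amazon.com',
    'https://images-na.ssl-images-amazon.com'
  );
}

toHttpsImageUrl('http://ecx.images-amazon.com/images/I/51tD0SDNMeL.SX166.jpg');
// -> 'https://images-na.ssl-images-amazon.com/images/I/51tD0SDNMeL.SX166.jpg'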