How to obtain and apply service packs - wso2

I am having the same problem as described in this JIRA about CORS headers. It has been fixed, but the issue is marked as Fixed with r205117 (the commit).
Is there a way to obtain service packs, or do I have to build the product myself?
As @Asela said, you can build this fix yourself.
Personally I found the instructions in the documentation rather lacking, so I thought I'd post an update on how I managed to patch it. Once I found a way to get this to work it was simple, but it took a while to discover how.
Download source
As per the docs, but be aware it's a 4 GB+ download:
svn checkout https://svn.wso2.org/repos/wso2/carbon/platform/tags/turing-chunk11 ~/wso2.chunk11
Patch code
cd to ~/wso2.chunk11/components/apimgt/org.wso2.carbon.apimgt.gateway/1.2.2/
edit src/main/java/org/wso2/carbon/apimgt/gateway/handlers/Utils.java
Build JAR
I used mvn clean compile install
Patch product
cd to product home, in my case ~/wso2am-1.7.0
cd to the patches directory: ./repository/components/patches/
mkdir and cd for the patch, in my case mkdir patch0009 ; cd patch0009
copy the new jar in there: cp ~/wso2.chunk11/components/apimgt/org.wso2.carbon.apimgt.gateway/1.2.2/target/org.wso2.carbon.apimgt.gateway-1.2.2.jar .
Start the product and the patch should apply.
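Condensed, the whole cycle looks roughly like this. This is only a sketch assuming the same paths and patch number as above; the startup script name is the standard WSO2 one, but check your distribution:
# Sketch: full patch cycle (paths and patch number as used above)
cd ~/wso2.chunk11/components/apimgt/org.wso2.carbon.apimgt.gateway/1.2.2/
# ... edit src/main/java/org/wso2/carbon/apimgt/gateway/handlers/Utils.java ...
mvn clean install
mkdir -p ~/wso2am-1.7.0/repository/components/patches/patch0009
cp target/org.wso2.carbon.apimgt.gateway-1.2.2.jar \
   ~/wso2am-1.7.0/repository/components/patches/patch0009/
~/wso2am-1.7.0/bin/wso2server.sh   # patches are applied at startup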
Test patch
Assuming you've added the '*' to ~/wso2am-1.7.0/repository/conf/api-manager.xml:
<Access-Control-Allow-Origin>*</Access-Control-Allow-Origin>
You can just curl an API and see the correct header:
curl -i -X OPTIONS --header 'Content-Type: application/json' --header 'Origin: http://somewhere.com' http://192.168.1.64:8280/myApi
And check that the correct domain setting is returned:
Access-Control-Allow-Origin: *

It has been fixed in APIM 1.8.0, which has not been released yet, so you may need to wait for the official 1.8.0 release. If you can obtain support from WSO2, I think they would give it to you as a patch. If not, you have the following options.
You can find the source change in r205117 and apply it to an older APIM version (say APIM 1.7.0). To do that you need to build the relevant jar file with the above diff. The original source for the jar file can be found here and the source diff can be found here. You can apply the source diff and build the jar file, then replace the original jar file, which can be found in APIM 1.7.0.
You can build the 1.8.0 product yourself, though it would be a little hard to build the whole product and get a new pack of 1.8.0.
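If you take the diff route, the mechanics are the usual patch-and-build cycle. A sketch only; fix.diff is a placeholder name for the diff downloaded from the links above:
# fix.diff is a placeholder for the downloaded r205117 source diff
cd ~/wso2.chunk11/components/apimgt/org.wso2.carbon.apimgt.gateway/1.2.2/
patch -p0 < ~/fix.diff    # adjust the -p level to match the paths in the diff
mvn clean install         # rebuilds the jar into target/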


Methods to automate ColdFusion Administrator settings

When working with a ColdFusion server you can access the CFIDE/administrator to set config values, which update the cfusion/lib/ xml files (e.g. neo-runtime.xml, neo-mail.xml, etc.)
I'd like to automate a deployment process that includes setting these administrator values so that I don't have to log in and manually set them for each new box that shares settings. I'm unsure of the best way to go about it.
Some thoughts I had are:
Replacing the full files with ones containing my custom settings. I've done this for local development, but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
A script to read the WDDX XML file and replace the attribute values. I'm having trouble finding information about this method (a rough sketch follows below).
Has anyone done anything like this before? Or does anyone have any recommendations on how to best go about this?
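For the second thought, a command-line XML editor can do the in-place replacement. A minimal sketch with xmlstarlet, assuming a hypothetical var name and install path; verify the actual WDDX element structure in your neo-*.xml files first:
# Sketch: update one WDDX var in neo-mail.xml (var name and path are examples)
xmlstarlet ed -L \
  -u "//var[@name='server']/string" -v "smtp.example.com" \
  /opt/coldfusion/cfusion/lib/neo-mail.xml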
At one company, we checked all the neo-*.xml files into source control, with a set for each environment. Devs only had access to the dev settings, and we could quickly deploy a local development environment with all the correct settings for new employees.
but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
You have to keep up with those changes and migrate each environment appropriately.
While I was there, we upgraded from 8 to 9, 9 to 11 and from 11 to 2016. Environments would have to be mixed as it took time to verify the applications worked with each new version of CF. Each server got their correct XML files for that environment and scripts would copy updates as needed. We had something like 55 servers in production running 8 instances each, so this scaled well.
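The copy step itself was nothing fancy; something along these lines, with hypothetical paths and environment names:
# Hypothetical sketch: push an environment's checked-in XML set to a server
ENV=production
cp "config/$ENV"/neo-*.xml /opt/coldfusion/cfusion/lib/
# restart the ColdFusion instance so the settings are picked up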
There is a very useful tool developed by Ortus Solutions for this kind of automation called CFConfig, which can be installed with their CommandBox command-line utility. This tool isn't only capable of setting administrator configuration: it can also export/import settings to a JSON file (cfconfig.json). It might be what you need.
Here is the link to their docs
https://cfconfig.ortusbooks.com/introduction/getting-started-guide
CFConfig worked perfectly for my needs. I marked @AndreasRu's answer as accepted for introducing me to that tool! I'm just adding this response with some additional detail for posterity.
Install CommandBox as part of deployment script
Install CFConfig as part of deployment script
Use CFConfig to export a config.json file from an existing box that will share settings with the new deployment (see the export example after the script below). Store this json file in source control for each type/env of box.
Use CFConfig to import the config.json as part of deployment script
Here's a simple example of what this looks like on Debian:
# Installs CommandBox
curl -fsSL https://downloads.ortussolutions.com/debs/gpg | apt-key add -
echo "deb https://downloads.ortussolutions.com/debs/noarch /" | tee -a /etc/apt/sources.list.d/commandbox.list
apt-get update && apt-get install apt-transport-https commandbox
# Installs CFConfig module
box install commandbox-cfconfig
# Import config settings
box cfconfig import from=/<path-to-config>/config.json to=/opt/ColdFusion/cfusion/ toFormat=adobe@11.0.19
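For step 3 above, the matching export looks like this; the path and engine version are placeholders for whatever the existing box runs:
# Export settings from an existing server into a JSON file (paths are examples)
box cfconfig export to=/<path-to-config>/config.json from=/opt/ColdFusion/cfusion/ fromFormat=adobe@11.0.19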

Unable to update drupal 8.4.3 to 8.4.5 using composer

I'm following this guide to "Update core via Composer" and I have my backups. The document says to run
composer update drupal/core --with-dependencies
When I do, I get:
Loading composer repositories with package information
Updating dependencies (including require-dev)
Nothing to install or update
Generating autoload files
> Drupal\Core\Composer\Composer::preAutoloadDump
> Drupal\Core\Composer\Composer::ensureHtaccess
drush core-status says I'm on version 8.4.3, so I expected the composer update command to move me to 8.4.5.
Later, the document says to "Review the status report page for errors", and the only error listed is that I need to be on version 8.4.5.
There must be something missing from the update documentation, and I can't figure out what it is.
Edit: Thanks Robb Davis, I tried:
rm -rf vendor
rm composer.lock
composer update drupal/core --with-dependencies
That gave me no change, leaving me with an 8.4.3 installation.
I tried composer require drupal/core:8.5 which gave me:
$ composer require drupal/core:8.5
./composer.json has been updated
Loading composer repositories with package information
Updating dependencies (including require-dev)
Your requirements could not be resolved to an installable set of packages.
Problem 1
- Installation request for drupal/drupal dev-master -> satisfiable by drupal/drupal[dev-master].
- don't install drupal/core 8.5.0|remove drupal/drupal dev-master
- Installation request for drupal/core 8.5 -> satisfiable by drupal/core[8.5.0].
Installation failed, reverting ./composer.json to its original content.
Nothing in that message makes any sense to me. So I'm still lost.
I was trying to update from 8.5 to 8.6. Dumping the core and vendor folders in addition to the composer.lock file did not work for me.
I was able to resolve this issue by removing the core/composer.json line from the merge-plugin include array and running composer update drupal/core --with-dependencies.
My merge-plugin key in the composer.json file in the docroot looks like:
"merge-plugin": {
"include": [],
"recurse": true,
"replace": false,
"merge-extra": false
}
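With core/composer.json removed from the include array, re-running the update command from the guide then picked up the new core release:
composer update drupal/core --with-dependencies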
Yes, this is kind of foobar. There is a blog post about it here:
https://orkjern.com/updating-to-drupal-85-with-composer
The solution in the blog didn't work for me but a solution in the comments did:
Delete your vendor directory
Delete your composer.lock file
Run composer update drupal/core --with-dependencies
This will rebuild/redownload everything, but it seems to work and updates properly to 8.5 (the most recent stable version of core).

How to install hexo blog in a remote repo to local machine?

I'm using Hexo on GitHub Pages. I mistakenly deleted the local files on my local machine. I tried to recreate the local copy by using git clone https://github.com/aaayumi/aaayumi.github.io.git. Then I ran npm install hexo-cli -g.
I could install all the necessary files, but when I typed hexo deploy,
it shows,
hexo deploy
Usage: hexo <command>
Commands:
help Get help on a command.
init Create a new Hexo folder.
version Display version information.
Global Options:
--config Specify config file instead of using _config.yml
--cwd Specify the CWD
--debug Display all verbose messages in the terminal
--draft Display draft posts
--safe Disable all plugins and scripts
--silent Hide output on console
For more help, you can use 'hexo help [command]' for the detailed information
or you can check the docs: http://hexo.io/docs/
Is there a way to be able to use the hexo blog locally?
The code in https://github.com/aaayumi/aaayumi.github.io is not the source code of your blog, it is just the generated content. What you need are the original markdown files that were inside your source folder.
You will have to recreate the blog with hexo init and rewrite your blog posts... Sorry about that.
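A minimal restart looks like this (the folder name is just an example):
# Recreate a fresh Hexo site and preview it locally
hexo init my-blog
cd my-blog
npm install
hexo server    # serves the rebuilt blog at http://localhost:4000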
Of course, you can look at your website directly (http://ayumi-saito.com/) and rewrite the posts by copy-pasting from there, which should not take that long.
Also, to make sure this does not happen again, you can publish your blog source files in a different repository, so that there is always a copy somewhere.
PS: Thanks for using my theme ;)

How to force GitHub Pages build?

Every GitHub repository can have (or be) a GitHub Pages website, that can be built with Jekyll. GitHub builds the site every time you push a new commit.
Is there a way to force a refresh of the GitHub Pages website without pushing a new commit?
From GitHub support, 2014-06-07:
It's not currently possible to manually trigger a rebuild, without pushing a commit to the appropriate branch.
Edit:
As Andy pointed out in the comments, you can push an empty commit with the command:
git commit -m 'rebuild pages' --allow-empty
git push origin <branch-name>
Edit 2:
Thanks to GitHub Actions, it's fairly easy to trigger a daily publish: https://stackoverflow.com/a/61706020/4548500.
If you want a quick script solution, here it is. Just do the following tasks only once, and run the script whenever you want to rebuild your GitHub page.
1. Create a personal access token for the command line:
Follow the official help here to create a personal access token. Basically, you have to log in to your GitHub account and go to: Settings > Developer settings > Personal access tokens > Generate new token.
Tick repo scope.
Copy the token.
2. Create the following script:
Create a file called RebuildPage.sh and add the lines:
#!/bin/bash
curl -u yourname:yourtoken -X POST https://api.github.com/repos/yourname/yourrepo/pages/builds
Here,
Replace yourname with your GitHub username.
Replace yourtoken with your copied personal access token.
Replace yourrepo with your repository name.
3. Run the script:
If you use Windows 10:
You need to set up Windows Subsystem for Linux, if not already done. Follow this to do so.
Remove the first line (#!/bin/bash) from the script and save the script as RebuildPage.bat. (i.e., replace .sh with .bat in the script file name)
Alternative to the above point: To get the double-click feature for running the .sh file:
Set bash.exe as the default program for .sh files.
Open regedit.exe and edit HKEY_CLASSES_ROOT\Applications\bash.exe\shell\open\command. Set the (Default) value to:
"C:\Windows\System32\bash.exe" -c " \"./$(grep -oE '[^\\]+$' <<< '%L')\";"
Now double-click the script whenever you want to rebuild your GitHub page. Done!
If you use Linux/Mac, running the script is the same as running any other script. Done!
Additional notes for the solution:
This solution uses the GitHub REST API v3. Here is the official documentation for the API.
Now that GitHub Actions are available, this is trivial to do:
# File: .github/workflows/refresh.yml
name: Refresh
on:
  schedule:
    - cron: '0 3 * * *' # Runs every day at 3am
jobs:
  refresh:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger GitHub pages rebuild
        run: |
          curl --fail --request POST \
            --url https://api.github.com/repos/${{ github.repository }}/pages/builds \
            --header "Authorization: Bearer $USER_TOKEN"
        env:
          # You must create a personal token with repo access as GitHub does
          # not yet support server-to-server page builds.
          USER_TOKEN: ${{ secrets.USER_TOKEN }}
Sample repo that does this: https://github.com/SUPERCILEX/personal-website/actions
Pages API: https://developer.github.com/v3/repos/pages/#request-a-page-build
I had this problem for a while, and pushing to the master branch didn't change anything on myapp.github.io, for two reasons:
1 - Build
No matter how many times I tried to push my work to master, the build would not start. I found a workaround by modifying a file in the GitHub online editor (open your index.html and edit it on the GitHub website, then commit).
2 - Caching issues
Even after a successful build, I would still see the exact same page on myapp.github.io, and hard reloading with Ctrl + Shift + R wouldn't solve it. Instead, if using Chrome, inspect your page, head into the Application tab, select "Clear storage" in the left menu, and click on "Clear site data" at the bottom of the menu.
Even after I pushed my changes to the GitHub repository, I was not able to see the changes today. Then I checked my repository settings for more information, and there I could see that all this time the build had been failing, which was why I was not able to see the changes.
You may also see a message as "Your site is having problems building: Unable to build page. Please try again later."
Then I checked my recent commits and tried to find out what caused this issue. In the end I was able to fix it.
There was an additional comma in the tags (,) and that caused this issue.
You will not get relevant error messages if there are any issues in your .md files. I recommend checking the build status and comparing the changes if you are facing the same issue.
This is doable as of v3 of the GitHub API, though it is currently in preview
https://developer.github.com/v3/repos/pages/#request-a-page-build
POST /repos/:owner/:repo/pages/builds
The empty commit didn't work for me, but based on @benett's answer, this worked for me:
Open Postman and create a new request with this URL: https://api.github.com/repos/[user_name]/[repo_name]/pages/builds (replace with your name and repo), and select the POST method.
Before you run it, go to the Headers tab and add a new key Accept with the value application/vnd.github.mister-fantastic-preview+json.
Now you can run it and visit your pages again.
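If you'd rather skip Postman, the same call should work from the command line with curl, reusing the personal access token approach from the script answer above (user, repo, and token are placeholders):
curl -X POST \
  -H "Accept: application/vnd.github.mister-fantastic-preview+json" \
  -u yourname:yourtoken \
  https://api.github.com/repos/yourname/yourrepo/pages/builds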
I was having trouble refreshing even though my GitHub Actions run showed that my site had been deployed.
Toggling the publishing source did the trick for me. I switched the publishing source from master to content and then back to master. You can check how to change the publishing source of a branch here.
I went through the same problem. To solve it, I developed a GitHub Action that works on a schedule and supports updating multiple gh-pages sites at the same time:
https://github.com/marketplace/actions/jekyll-update-github-pages-without-new-commit. The action updates gh-pages without generating new commits.
name: Update all github pages
on:
  schedule:
    - cron: "30 0 * * *"
jobs:
  github-pages:
    runs-on: ubuntu-latest
    name: Update Github Pages Initiatives
    steps:
      - name: Jekyll update github pages without new commit
        uses: DP6/jekyll-update-pages-action@v1.0.1
        with:
          DEPLOY_TOKEN: ${{ secrets.GH_PAGES_DEPLOY_TOKEN }}
          USER: ${{ secrets.GH_PAGES_USER }}
          FILTER: 'is%3Apublic%20org%3Adp6'
Alternative Solution
You may have received an email from GitHub telling you that Jekyll did not succeed at building your site when you pushed it to your gh-pages branch. If this is the case, you can try to force push to trigger another build.
If you use a dedicated folder for the final website, let's say a public folder, you can try to rebuild that folder and add it to your committed changes. After that, you'll need to split those files into your gh-pages branch and force-push them to trigger another build, even if the files did not change at all. The rest of the code below just removes the commit for the public folder for convenience and removes it from the local filesystem.
Code
git add public
git commit -am ":bug: triggering another jekyll build"
git push origin $(git subtree split --prefix public master):gh-pages --force
git reset HEAD~1
rm -rf public
Tips
If there are uncommited changes that are not part of the final site, you can stash them with the following command.
git stash
Then do the above command to manually force the Jekyll build and unstash them.
git stash pop
References
Online Git Manual
I surmise from other answers that this was once difficult?
Go to Settings->Pages
Just under "Change theme" you'll see a link to the actual Github action labeled "pages build and deployment workflow".
Click Re-run all jobs

Setting Content-Type for static website hosted on AWS S3

I'm hosting a static website on S3. To push my site to Amazon I use the s3cmd command line tool. All works fine except setting the Content-Type to text/html;charset=utf-8.
I know I can set the charset in the meta tag in the HTML file, but I would like to avoid it.
Here is the exact command I'm using:
s3cmd --add-header='Content-Encoding':'gzip' \
      --add-header='Content-Type':'text/html;charset=utf-8' \
      put index.html.gz s3://www.example.com/index.html
Here is the error I get:
ERROR: S3 error: 403 (SignatureDoesNotMatch): The request signature we calculated does not match the signature you provided. Check your key and signing method.
If I remove the ;charset=utf-8 part from the above command it works, but the Content-Type gets set to text/html not text/html;charset=utf-8.
Two step process to solve your problem.
(1) Upgrade your installation of s3cmd. Version 1.0.x does not have the capability to set the charset. Install from master on GitHub. Master includes fixes for this (1) bug and this (2) bug, which caused the failure to recognize the format of the content-type and the "called before definition" problem in earlier versions.
To install s3cmd from master on OSX do the following:
git clone https://github.com/s3tools/s3cmd.git
cd s3cmd/
sudo python setup.py install (sudo optional based on your setup)
Make sure your Python libraries are in your path by adding the following to your .profile, .bashrc, or .zshrc (again, depending on your system).
export PATH="/Library/Frameworks/Python.framework/Versions/2.7/bin:$PATH"
But if you use Homebrew, this might cause conflicts, so just symlink to the executable instead:
ln -s /Library/Frameworks/Python.framework/Versions/2.7/bin/s3cmd /usr/local/bin/s3cmd
Close the terminal and reopen it.
s3cmd --version
will still output
s3cmd version 1.5.0-alpha3, but it's the patched version.
(2) Once upgraded, use:
s3cmd --acl-public --no-preserve --add-header="Content-Encoding:gzip" --add-header="Cache-Control:public, max-age=86400" --mime-type="text/html; charset=utf-8" put index.html s3://www.example.com/index.html
If the upload succeeds and sets the Content-Type to "text/html; charset=utf-8" but you see this error in the process:
WARNING: Module python-magic is not available...
I prefer to live without python-magic; I find that if you don't specifically set the mime-type, python-magic often guesses wrong. If you do install python-magic, be sure to set --mime-type="application/javascript" in s3cmd, or python-magic will guess it to be "application/x-gzip" if you gzip your JS locally.
Install python-magic:
sudo pip install python-magic
pip broke with the recent OS X upgrade, so you may need to update pip:
sudo easy_install -U pip
That will do it. All this works with s3cmd sync too, not just put. I suggest you put s3cmd sync into a thor-type task so you don't forget to set the mime-type on any particular file (if you are using python-magic on gzipped files).
This is a gist of an example thor task for deploying a static Middleman site to s3. This task allows you to rename files locally and use s3cmd sync rather than using S3cmd put to rename them one-by-one.
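For reference, a sync-based deploy along the lines above might look like the following. This is a sketch with placeholder bucket and paths, and it assumes the synced directory shares a single content type (otherwise python-magic or per-file puts are needed):
# Hypothetical sketch: sync a directory of gzipped HTML to the bucket
s3cmd sync --acl-public --no-preserve \
  --add-header="Content-Encoding:gzip" \
  --add-header="Cache-Control:public, max-age=86400" \
  --mime-type="text/html; charset=utf-8" \
  ./build/ s3://www.example.com/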