Foswiki: Uploading and downloading topics without FTP

I have a Foswiki wiki on a server. Is it possible to script the following without FTP access (for various reasons I can't use it):
Download a topic's wikitext, modify it locally, then upload it again (overwriting the topic)
Upload wikitext to a new topic
I've been doing these tasks manually, but I'd like to automate them. I've looked into the Foswiki API and a few plugins, but nothing seems capable of doing this.
Is there a way? (any programming language)

If you have web access, you could drive the bin/view and bin/save scripts remotely from a script.
Take a look at our BuildContrib upload target for an example. It gets a strikeone key and downloads the original topic to recover any form data, then uploads the topic text, creating a new version. It's written in Perl and uses LWP.
https://github.com/foswiki/distro/blob/master/BuildContrib/lib/Foswiki/Contrib/BuildContrib/Targets/upload.pm
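To make the shape of that approach concrete, here is a minimal sketch in Python with requests rather than the Perl/LWP original. The site URL, credentials, and topic names are placeholders, and it assumes template login plus simple embedded-key validation; with StrikeOne enabled you must additionally hash the key the way the strikeone JavaScript does.

    # Minimal sketch, not the BuildContrib code itself; everything
    # site-specific below is a placeholder.
    import re
    import requests

    BASE = "https://wiki.example.com/bin"   # hypothetical Foswiki bin URL
    s = requests.Session()
    s.post(BASE + "/login", data={"username": "me", "password": "secret"})

    # 1. Download the topic's raw wikitext
    text = s.get(BASE + "/view/Sandbox/MyTopic", params={"raw": "text"}).text

    # 2. Modify it locally
    text += "\n\nEdited by script.\n"

    # 3. Fetch the edit form and scrape out a validation key
    #    (naive scrape of the hidden field; adjust to your skin's HTML)
    edit = s.get(BASE + "/edit/Sandbox/MyTopic", params={"nowysiwyg": 1}).text
    key = re.search(r"name=.validation_key.\s+value=.([^'\"]+)", edit).group(1)

    # 4. Save it back (this also works for creating a brand-new topic)
    s.post(BASE + "/save/Sandbox/MyTopic",
           data={"text": text, "validation_key": key, "forcenewrevision": 1})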

The following isn't(!) the right solution (surely a nicer Foswiki-native approach exists), but if you know Perl, you can do almost anything with it:
Install Firefox
Install the MozRepl add-on into it
Install the WWW::Mechanize::Firefox Perl module
Now you can script anything you could do directly in the browser, e.g. log into Foswiki, click buttons, save topics, and so on. The drawback: it isn't an easy approach, and you need to know many details.
I use this technique myself for testing.

Related

Enable inline editing for Go code in AWS Lambda

As inline code editing is not enabled for Go on AWS Lambda, I am trying to create a Google Chrome extension to edit the Go code by pulling the source text or zip file from the S3 bucket. It would be nice if I could also deploy the updated Go code to Lambda.
I think I will have to perform the following steps from the extension:
Get the Go code from the S3 bucket or Github
Update it
Create a zip file from the updated code
Upload the zip file to the S3 bucket or Github
Deploy the updated zip file on the Lambda
I have no idea if this is a good approach or if another approach is possible. I would appreciate it if anyone could suggest a better approach or tell me whether what I am thinking is feasible.
I like the idea, but unfortunately I am not sure it is a good one.
Let me explain:
All the languages AWS Lambda supports that allow inline editing are more or less interpreted languages: JavaScript, Python, etc.
The AWS runtime for those languages reads plain text files and compiles/runs them.
Since you deploy plain text files and the runtime takes care of running them, the AWS Lambda console allows you to edit those files.
Go, on the other hand, like other supported languages such as Swift or Java, needs to be deployed to AWS as a "binary" (I use air quotes because a Java JAR is, strictly speaking, not a binary but bytecode, which is then interpreted by the JVM).
The AWS Lambda runtime for those languages expects a binary, not plain text. That is why you cannot edit the code of Lambdas using those runtimes in the AWS console.
So even if you opened that ZIP, you would not find editable code.
Of course you could put the binary and the plain text code in that ZIP and then when you open that ZIP through your Chrome extension, you could show the plain text code to the user.
But then there is the matter of compiling the code into a binary that the AWS Lambda Go runtime can actually run.
So your Chrome extension would need to bundle a Go compiler. I am not sure that is possible, but I am sure it would not be trivial.
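For contrast, deploying Go from outside the browser is straightforward. Here is a hedged sketch in Python with boto3; the function name and file names are placeholders, and the go build step is exactly the part a Chrome extension could not easily replicate.

    # Hedged sketch of a local deploy pipeline; not a known-good setup.
    import io
    import os
    import subprocess
    import zipfile

    import boto3

    # Cross-compile the handler for the Lambda Linux runtime
    env = dict(os.environ, GOOS="linux", GOARCH="amd64", CGO_ENABLED="0")
    subprocess.run(["go", "build", "-o", "main", "."], env=env, check=True)

    # Zip the resulting binary in memory (the executable bit is taken
    # from the local file's permissions)
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
        z.write("main")

    # Push the new code to the existing function
    boto3.client("lambda").update_function_code(
        FunctionName="my-go-function",      # hypothetical function name
        ZipFile=buf.getvalue(),
    )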

Headless Chrome CLI in production

I will be doing some pdf generation for my application. Currently, my plan is to create HTML using templates and convert them to PDF.
The PDFs aren't long, three pages at most, and we will be generating roughly 100 documents a day.
I was happy with the results I got from chrome --headless on my local machine. I called the CLI command directly from my Clojure code (sketched below). So far so good. But looking at the number of wrappers available (Browserless, Chromeless, Puppeteer, ...), I'm not sure about the scalability factor in production.
Is it safe to use/call the Chrome CLI directly on production boxes?
What will I miss if I skip these wrappers?
My server side stack is Clojure/Compojure/Leiningen. Any insights/alternatives are appreciated.
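Edit: for reference, the call I make looks roughly like this, sketched here in Python's subprocess for clarity (the Clojure version shells out the same command; the binary name varies by install: google-chrome, chromium-browser, ...):

    import subprocess

    def html_to_pdf(html_path: str, pdf_path: str) -> None:
        # Chrome 59+ headless print-to-PDF invocation
        subprocess.run(
            [
                "google-chrome",
                "--headless",
                "--disable-gpu",
                "--print-to-pdf=" + pdf_path,
                "file://" + html_path,
            ],
            check=True,
            timeout=30,      # guard against a hung renderer in production
        )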
I'm using athenapdf for PDF generation in combination with Clojure:
https://github.com/arachnys/athenapdf
It has a REST interface, and since it runs in Docker it's easy to scale.
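For example, a minimal call against its weaver microservice from Python with requests; the endpoint, port, and default auth key follow the athenapdf README, so verify them against your own deployment:

    import requests

    # Ask the weaver service to render a URL to PDF
    resp = requests.get(
        "http://localhost:8080/convert",
        params={"auth": "arachnys-weaver", "url": "https://example.com/invoice"},
        timeout=60,
    )
    resp.raise_for_status()
    with open("invoice.pdf", "wb") as f:
        f.write(resp.content)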
Instead of detouring through HTML and Chrome, I'd just use a PDF-creating library such as clj-pdf. Here is a nice blog post about it.
P.S. If you don't mind running a third program to generate the PDF, I would use Emacs with org-mode (or heck, even write it in Elisp altogether) ;)

How do I get Google Cloud Source file view to offer the EDIT button?

The GC docs say [quoted text omitted] and show [screenshot omitted], but I get no EDIT button.
How do I get the EDIT button?
My setup is [details omitted].
Thanks, ChrisJJ. As we head towards GA for Cloud Source Repositories, we're trimming out underused and half-baked features, and this one is both. It's particularly half-baked because you can't use it to create new files or folders, move files or folders around, delete files, keep files in sync with the Cloud Shell, etc.
So, we've pulled this feature (and are updating the docs appropriately). However, if you'd like to edit your files on the web, you can do so with the Cloud Shell directly (via nano, vi or emacs) or you can use the new code editor feature described here: https://cloudplatform.googleblog.com/2016/10/introducing-Google-Cloud-Shels-new-code-editor.html
I think you'll find that this is a MUCH more full-featured editor experience and we're continuing to look at ways to make it even better.

About Sitecore Backup

I am trying to back up a whole Sitecore website.
I know that the Package Designer can do part of the job, but not all of it.
Having a backup is always good for when the site breaks accidentally.
Is there a way or a tool to back up the whole Sitecore website?
I am new to Sitecore, so any advice is welcome.
Thank you!
We've got a SQL job running to back up the databases nightly.
Apart from that, when I deploy code and it's a small change, I usually back up only the parts I'm going to replace. If it's a big code deploy, I just back up the whole website (code-wise, anyway) before deploying the code package.
Apart from that we also run scheduled backups of the code (although I don't know the intervals), and of course we've got source control if everything else fails.
If you've got an automated deployment tool you could also automate the above of course.
Before a major deploy of content or code, I typically backup the master database and zip everything in the website directory minus the App_Data and temp directories. That way if the deploy goes wrong, I can restore the code and database fairly quickly and be back to the previous state.
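If you want to script that file backup, here is a minimal sketch in Python; the site root is a placeholder for your own layout:

    import os
    import zipfile

    SITE_ROOT = r"C:\inetpub\wwwroot\MySite"   # hypothetical website root
    EXCLUDE = {"App_Data", "temp"}

    with zipfile.ZipFile("site-backup.zip", "w", zipfile.ZIP_DEFLATED) as z:
        for root, dirs, files in os.walk(SITE_ROOT):
            # prune excluded directories in place so os.walk skips them
            dirs[:] = [d for d in dirs if d not in EXCLUDE]
            for name in files:
                path = os.path.join(root, name)
                z.write(path, os.path.relpath(path, SITE_ROOT))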
I have no knowledge of a tool that can do this for you, but there are a few easy ways to handle it:
1) You can create a database backup of the master database, but this contains only content, not files such as media files saved on disk or your complete built solution. It is always a good idea to schedule a database backup every night and keep the backups for at least a week or more.
2) With the Package Designer you can create dynamic packages that contain all your content, media files, and solution files on disk. This is an easy way to deploy the site onto a new Sitecore installation all at once, but it requires a manual backup every time.
3) Another option is to serialize your entire content tree to an XML format on disk from the Developer tab. Once serialized, you can revert it back into the content tree.
I'd suggest thinking of this in two parts. The first part is backing up the application, which is as simple as making sure your application is in some SCM system.
For that you can use Team Development for Sitecore. One of its features allows you to connect a Visual Studio project to your Sitecore instance.
You can select Sitecore items that you want to be stored in your solution and it will serialize them and place them into your solution.
You can then check them into your SCM system and sleep easier.
The thing to note is deciding which items to place in source control. Generally you can think of Sitecore items as developer-owned or Content Editor-owned. The items you place in your solution are the developer-owned ones; templates, sublayouts, layouts, and content items that you need for the site to function are good examples.
This way if something goes bad a base restoration is quick and easy.
The second part is the backup of the content in Sitecore that has been added since your deployment. For that, as Trayek said above, use a SQL job to do the backups at whatever interval you are comfortable with.
If you're bored, I have a post on using TDS (Team Development for Sitecore) that you can check out: Working with Sitecore, Part Nine: TDS
Expanding a bit more on what Trayek said, my suggestion would be to set up Continuous Integration (CI) with automated deploys using TeamCity.
A good answer is also given here on Stack Overflow.
Basically, in your case TeamCity would automatically:
1. Take a backup of the current website (i.e. the code) and deploy the new code on top of it.
2. Run scripts to take a differential backup of the SQL databases, if need be (see the sketch below).
Hope this helps.
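For illustration, point 2 could look roughly like this in Python with pyodbc; the server, database, and backup path are placeholders:

    import pyodbc

    # BACKUP DATABASE cannot run inside a transaction, hence autocommit=True
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=sql01;DATABASE=master;Trusted_Connection=yes;",
        autocommit=True,
    )
    conn.execute(
        "BACKUP DATABASE [Sitecore_Master] "
        r"TO DISK = N'D:\Backups\Sitecore_Master_diff.bak' "
        "WITH DIFFERENTIAL, INIT"
    )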
Take a look at the Sitecore Instance Manager module. It works really well for packaging an entire Sitecore instance.

Web API (like the GitHub REST API) for a personal Git server repo to enable "git log"?

I will probably end up re-inventing parts of the GitHub REST API for my own repo server. But maybe a server script that does this is already out there? Or maybe you have other suggestions?
This is my use case:
I am developing a Firefox extension that shall display the data of a
git log -- <path>
I could always write a little server script that uses the well-developed JGit and runs the "git log" command there. But then the Firefox extension depends on that server script ;(
I was wondering whether something like the GitHub REST API exists for "not-GitHub" repos, which would be more standard than my little server script?
I also thought about a Git JS client, like Git.JS (apparently the only JS client; it works with Node.js, but unfortunately the project is no longer active and has no documentation). However, I don't need a full client. I just want to retrieve some information read-only from the remote master repo.
Although I am late to the party, I have noticed a few projects that might contribute to the answer:
Orion Git API (Orion is an Eclipse project)
RestfulGit from Hulu on GitHub
If you haven't tried it, GitBlit is a VERY cool option. I have multiple installations on a few Windows dedicated servers that I pull together using a REST API. I had it up and running in 5 minutes on Windows using the "GO: Single-Stack Solution".
Gitblit GO is an integrated, single-stack solution based on Jetty.
You do not need Apache httpd, Perl, Git, or Gitweb. Should you want to use some or all of those, you still can; Gitblit plays nice with the other kids on the block.
This is what you should download if you want to go from zero to Git in less than 5 mins.
I would say you definitely need to implement some kind of server-side code on your own.
You can choose any server-side language you like; I believe Ruby or Python would work fine. Then create a simple website with one page that embeds the output of git log according to the given parameters.
All the other options will not work for you, I believe. You cannot remotely access a Git repository's history, due to the distributed nature of Git; you can only read the history of a local repository.
Your extension can then read that web page and parse the output to get what you need.
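For illustration, a minimal sketch of such a script in Python with Flask; the repository path is a placeholder, and a real deployment should validate the path parameter:

    import subprocess

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    REPO = "/srv/git/myproject.git"    # hypothetical repo on the server

    @app.route("/log")
    def log():
        path = request.args.get("path", ".")
        # unit-separator (%x1f) delimited fields, one commit per line
        out = subprocess.run(
            ["git", "-C", REPO, "log",
             "--pretty=format:%H%x1f%an%x1f%ad%x1f%s", "--", path],
            capture_output=True, text=True, check=True,
        ).stdout
        fields = ("sha", "author", "date", "subject")
        return jsonify([dict(zip(fields, line.split("\x1f")))
                        for line in out.splitlines() if line])

    if __name__ == "__main__":
        app.run()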