cctiddly change workspace URL - wiki

I'm using ccTiddly (which is built on TiddlyWiki) and I would like to change the URL of my workspaces. I'm not sure how to proceed; I tried once already and everything got corrupted.
The old URL is www.sub.mysite.com/wiki and the new one is www.mysite.com/wiki
I was thinking of moving all the files from the FTP folder and then editing the database to remove the "sub" from all the URLs.
Will that work fine?
I only have 2 workspaces with few tiddlers each.
Thanks!

It will depend on which version of ccTiddly you are using.
Take a look at the fields in the database. I think you should be able to make the change just by updating the workspace field on the tiddler table, but check that there are no other fields storing the workspace.
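For example, a minimal (untested) sketch of that kind of update, assuming a MySQL backend and that the workspace column on the tiddler table stores the old URL; the table/column names and credentials here are placeholders, and you should back up the database before trying anything like this:

import pymysql

# Placeholder credentials; adjust to your ccTiddly database.
conn = pymysql.connect(host='localhost', user='cctiddly',
                       password='secret', database='cctiddly')
try:
    with conn.cursor() as cur:
        # Rewrite the old host to the new one wherever it appears in the
        # workspace field (assumed column/table names; check your schema).
        cur.execute(
            "UPDATE tiddler "
            "SET workspace = REPLACE(workspace, "
            "'www.sub.mysite.com', 'www.mysite.com')"
        )
    conn.commit()
finally:
    conn.close()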
When you said it corrupted everything last time, what exactly do you mean?

Related

How to share Newman htmlextra report?

This may be a basic question, but I cannot figure out the answer. I have a simple Postman collection that is run through newman:
newman run testPostman.json -r htmlextra
That generates a nice dynamic HTML report of the test run.
How can I then share that with someone else, e.g. via email? The HTML report is just opened through a local URL, and I can't figure out how to save it so that it stays in its dynamic state. Right-clicking and saving as .html saves the file, but you lose the ability to click around in it.
I realize that I can change the export path so it saves to some shared drive somewhere, but aside from that is there any other way?
It has already been saved to newman/ in the current working directory, so there is no need to 'Save As' again. You can zip it and send it via email.
If you want to change the location of the generated report, you can point the htmlextra reporter at a different export path; see below.
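For example (the flag name is taken from the newman-reporter-htmlextra documentation; double-check it against your installed version):
newman run testPostman.json -r htmlextra --reporter-htmlextra-export ./reports/report.html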

S3/Athena query result location and “Invalid S3 folder location”

Are there particular requirements to the bucket for specifying the query result location? When I try to create a new table, I get a popup:
Before you run your first query, you need to set up a query result location in Amazon S3. Learn more
So I click the link and specify my query result location in the format specified s3://query-results-bucket/folder. But it always says
Invalid S3 folder location
I posted this on Super User first, but it was closed (not sure why...).
The folder name needs to have a trailing slash:
s3://query-results-bucket/folder/
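The same rule applies if you set the result location programmatically. A minimal boto3 sketch (bucket and folder names are the ones from the question) with the trailing slash in place:

import boto3

athena = boto3.client('athena')

# Note the trailing slash on the OutputLocation.
athena.start_query_execution(
    QueryString='SELECT 1',
    ResultConfiguration={'OutputLocation': 's3://query-results-bucket/folder/'},
)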
Ran into this earlier in the week.
First, make sure the bucket exists. There doesn't appear to be an option to create the bucket when setting the value in the Athena console.
Next, make sure you have the bucket specified properly. In my case, I initially had s3:/// - there is no validation, so an extra character will cause this error. If you go to the Athena settings, you can see what the bucket settings look like.
Finally, check the workgroup: there is a default workgroup per account; make sure it's not disabled. You can create additional workgroups, each of which will need its own settings.
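If it helps, a quick boto3 sketch of those three checks (the bucket name is just an example, and 'primary' is the default workgroup name in most accounts):

import boto3

s3 = boto3.client('s3')
athena = boto3.client('athena')

# 1. The bucket must already exist; Athena won't create it for you.
s3.head_bucket(Bucket='query-results-bucket')  # raises ClientError if missing

# 2. The location must be a well-formed s3:// URL (no stray slashes).
output_location = 's3://query-results-bucket/folder/'

# 3. The workgroup must exist and be enabled.
workgroup = athena.get_work_group(WorkGroup='primary')
print(workgroup['WorkGroup']['State'])  # expect 'ENABLED'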

Migrate ColdFusion scheduled tasks using neo-cron.xml

We currently have two ColdFusion 10 dedicated servers which we are migrating to a single VPS server. We have many scheduled tasks on each. I took each of the neo-cron.xml files, copied the var XML elements from within the struct type='coldfusion.server.ConfigMap' XML element, and pasted them into that element in the neo-cron.xml file on the new server. Afterward I restarted the ColdFusion service, logged into CF Admin, and the tasks all show as expected.
My problem is, when I try to update any of the tasks I get the following error when saving:
An error occured scheduling the task. Unable to store Job :
'SERVERSCHEDULETASK#$%^DEFAULT.job_MAKE CATALOGS (SITE CONTROL)',
because one already exists with this identification
Also, when I try to delete a task, it tells me a task with that name does not exist. So it seems to me that the task information must also be stored elsewhere. When I try to update a task, the record doesn't exist in the secondary location, so ColdFusion tries to add it as new to the neo-cron.xml file, which causes an error because it already exists. And when trying to delete, it doesn't exist in the secondary location, so it says a task with that name does not exist. That is just a guess, though.
Any ideas how I can get this to work without manually re-creating dozens of tasks? From what I've read this should work, but I need to be able to edit the tasks.
Thank you.
After a lot of hair-pulling I was able to figure out the problem. It all boiled down to having parentheses in the scheduled task names. This was causing both the "Unable to store Job : 'SERVERSCHEDULETASK#$%^DEFAULT.job_MAKE CATALOGS (SITE CONTROL)', because one already exists with this identification" error and also causing me to be unable to delete jobs. I believe it has something to do with encoding the parentheses because the actual neo-cron.xml name attribute of the var element encodes the name like so:
serverscheduletask#$%^default#$%^MAKE CATALOGS (SITE CONTROL)
Note that this anomaly did not exist on ColdFusion 10, Update 10, but does exist on Update 13. I'm not sure which update broke it, but there you go.
You will have to copy the neo-cron.xml from C:\ColdFusion10\lib of one server to the other. After that, restart the server for the changes to take effect. Log in to the CF Admin and check the functionality.
This should work.
Note: please take a backup of the existing neo-cron.xml before making the changes.
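If you prefer to merge only the task entries (the approach described in the question) rather than overwrite the whole file, here is a rough, untested sketch of that step. The element names come from the question, the paths are placeholders, and the result is written to a new file so the original stays untouched:

import xml.etree.ElementTree as ET

src = ET.parse('old-server/neo-cron.xml')
dst = ET.parse('new-server/neo-cron.xml')

def task_map(tree):
    # The scheduled tasks live as <var> children of the first
    # <struct type='coldfusion.server.ConfigMap'> element.
    for struct in tree.iter('struct'):
        if struct.get('type') == 'coldfusion.server.ConfigMap':
            return struct
    raise ValueError('ConfigMap struct not found')

dst_map = task_map(dst)
existing = {var.get('name') for var in dst_map.findall('var')}

for var in task_map(src).findall('var'):
    if var.get('name') not in existing:  # skip tasks that already exist
        dst_map.append(var)

dst.write('new-server/neo-cron-merged.xml')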

Autonomy - Force reindexing without losing data

I need to add a new parameter to my Autonomy HTTP fetch configuration.
ImportFieldOp2=Expand
ImportFieldOpApplyTo2=AUTHOR
ImportFieldOpParam2=;;AUTHOR_M
I stopped the HTTPFetch service and, after modifying the config, started the service again.
The problem is that the change is only applied to new documents.
The old documents don't have the new parameter applied.
If I remove all the indexed documents it works, but this is a production environment and I can't do that.
How can I force re-indexing of the old documents without losing data?
Thank you.
Look at the Content engine parameter KillDuplicates.
KillDuplicates=DREREFERENCE should do what you want.
A full re-crawl will be required so that the existing documents are replaced with freshly indexed versions.

How to mix Django, Uploadify, and S3Boto Storage Backend?

Background
I'm doing fairly big file uploads on Django. File size is generally 10MB-100MB.
I'm on Heroku and I've been hitting the request timeout of 30 seconds.
The Beginning
In order to get around the limit, Heroku's recommendation is to upload from the browser DIRECTLY to S3.
Amazon documents this by showing you how to write an HTML form to perform the upload.
Since I'm on Django, rather than write the HTML by hand, I'm using django-uploadify-s3 (example). This provides me with an SWF object, wrapped in JS, that performs the actual upload.
This part is working fine! Hooray!
The Problem
The problem is in tying that data back to my Django model in a sane way.
Right now the data comes back as a simple URL string, pointing to the file's location.
However, I was previously using S3 Boto from django-storages to manage all of my files as FileFields, backed by the delightful S3BotoStorageFile.
To reiterate, S3 Boto is working great in isolation, Uploadify is working great in isolation, the problem is in putting the two together.
My understanding is that the only way to populate the FileField is by providing both the filename AND the file content. When you're uploading files from the browser to Django, this is no problem, as Django has the file content in a buffer and can do whatever it likes with it. However, when doing direct-to-S3 uploads like me, Django only receives the file name and URL, not the binary data, so I can't properly populate the FieldFile.
Cry For Help
Anyone know a graceful way to use S3Boto's FileField in conjunction with direct-to-S3 uploading?
Otherwise, what's the best way to manage an S3 file based just on its URL, including setting expiration, key ID, etc.?
Many thanks!
Use a URLField.
I had a similar issue where I wanted to either store the file to S3 directly using a FileField or give the user the option to enter the URL directly. To handle that, I used two fields in my model, one FileField and one URLField. In the template I could then use 'or' to see which one exists and use it, like {{ instance.filefield or instance.url }}.
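A rough sketch of that two-field approach (model and field names here are illustrative, not from the original post):

from django.db import models

class Upload(models.Model):
    # Populated when the file is uploaded through Django itself.
    filefield = models.FileField(upload_to='uploads/', blank=True, null=True)
    # Populated when the file went directly to S3 and we only know its URL.
    url = models.URLField(blank=True)

    @property
    def file_location(self):
        # Prefer the managed file if it exists, otherwise fall back to the URL.
        return self.filefield.url if self.filefield else self.url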
This is untested, but you should be able to use:
from django.core.files.storage import default_storage
f = default_storage.open('name_you_expect_in_s3', 'r')
#f is an instance of S3BotoStorageFile, and can be assigned to a field
obj, created = YourObject.objects.get_or_create(**stuff_you_know)
obj.s3file_field = f
obj.save()
I think this should set up the local pointer to the S3 file and save the model without overwriting the content.
ETA: you should only do this after the upload completes on S3 and you know the key in S3.
Check out django-filetransfers. It looks like it plays nicely with django-storages.
I've never used Django, so YMMV :) but why not just write a single byte to populate the content? That way, you can still use FieldFile.
I'm thinking that writing actual SQL may be the easiest solution here. Alternatively, you could subclass S3BotoStorage, override the _save method, and allow for an optional flag or kwarg that sidesteps all the other saving logic and just returns the cleaned_name; see the sketch below.
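A rough, untested sketch of that subclassing idea, assuming the older django-storages S3BotoStorage backend (the _save and _clean_name names come from that backend, so verify them against your installed version; the already_on_s3 flag is something your own upload-complete handler would have to set):

from storages.backends.s3boto import S3BotoStorage

class DirectToS3Storage(S3BotoStorage):
    def _save(self, name, content):
        # If the content was already uploaded straight to S3, skip the upload
        # and just return the cleaned key so the FileField points at it.
        if getattr(content, 'already_on_s3', False):
            return self._clean_name(name)
        return super(DirectToS3Storage, self)._save(name, content)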