Your pages are yours and you should have them safely saved and backed up on your own computers. This is easy to do. Just grab the export and keep it safe.
The export.json link below is an up-to-date copy of every page on the origin server. Hover over it to be sure it points at your server, then click it to start the download and save the file when it finishes. export.json
If you are more comfortable on the command line then you can use curl to do the same thing.
curl http://site.org/system/export.json >export.json
I use a cron job to save my Singapore-hosted wiki on a daily basis. I run this script, passing in the domain name for the wiki.
# save a daily backup with a retention of one week
# usage: sh mkbackup.sh forage.ward.fed.wiki.org

site=$1
day=`date +%a`
curl -s http://$site/system/export.json \
  > backups/$site.$day.export.json
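For reference, the cron entry that runs such a script nightly might look like the following. The 2 a.m. time and the $HOME/mkbackup.sh path are illustrative choices, not part of the script above.

```
# run the backup script every night at 2:00 a.m.
0 2 * * * sh $HOME/mkbackup.sh forage.ward.fed.wiki.org
```

Because the script names each file after the day of the week, the seven rotating filenames give you the one-week retention automatically.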
You will have a file containing every page from your site including all the story items and every journal entry. The file will be organized as an object where keys are page slugs and values are page objects.
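As a sketch of that layout, here is a minimal stand-in for a downloaded export.json. The field names follow the usual page shape of title, story, and journal; the page content itself is invented for illustration.

```python
import json

# a tiny stand-in for a downloaded export.json:
# keys are page slugs, values are page objects
export = json.loads("""
{
  "welcome-visitors": {
    "title": "Welcome Visitors",
    "story": [{"type": "paragraph", "text": "Hello."}],
    "journal": [{"type": "create", "date": 1400000000000}]
  }
}
""")

# list every page with its story and journal sizes
for slug, page in export.items():
    print(slug, page["title"],
          len(page["story"]), "story items,",
          len(page["journal"]), "journal entries")
```

Iterating over the top-level object like this is all a backup-inspection script needs; each value is a complete page, history included.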
See JSON Schema
This file can be large, ranging from dozens to hundreds of megabytes. A good strategy is to save periodically, keeping the last few downloads.
You should have regular backups of your own computer. I use Apple's Time Machine.
If your wiki server has lost your pages then you will want it repaired, even if it then appears empty. Restoring your saved files to this server should leave every page ready to work as usual.
You may need to claim your site again and accept (or modify) whatever flag is created for your new site.
If you are moving to a new host then you will want to redirect your domain name to that server. Since sites are always subdomains, this will always be technically possible, but it may not be the policy of the owner of the domain you were hosted within.
You will need to run a script that posts each page to the server, one at a time. Even a modest-sized site is probably too large to post in a single operation.
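One way to sketch such a script, assuming the server accepts a JSON "create" action PUT to a per-page action route. Both the action shape and the /page/<slug>/action endpoint here are assumptions about the server's API; verify them against your own server before relying on this.

```python
import json
import urllib.parse
import urllib.request

def create_action(slug, page):
    """Build one 'create' action for a page pulled out of export.json.

    The action shape is an assumption about the wiki server's API,
    not a documented contract.
    """
    return {
        "type": "create",
        "item": {
            "title": page["title"],
            "story": page.get("story", []),
        },
    }

def restore(site, export_path):
    """PUT each page to the server, one page per request."""
    with open(export_path) as f:
        export = json.load(f)
    for slug, page in export.items():
        # hypothetical endpoint; adjust to match your server
        url = "http://%s/page/%s/action" % (site, slug)
        body = urllib.parse.urlencode(
            {"action": json.dumps(create_action(slug, page))}
        ).encode()
        req = urllib.request.Request(url, data=body, method="PUT")
        urllib.request.urlopen(req)

# usage: restore("example.fed.wiki.org", "export.json")
```

Posting page by page keeps each request small, so even a multi-hundred-megabyte export restores without hitting request-size limits.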
We intend to make restore drag-and-drop simple. We have yet to determine if browsers are capable of handling 200 MB drops.