Wikidot already supports some of the tags needed for SEO (Search Engine Optimization), but it lacks two features that are a must for SEO.
Canonical page
First of all, we can't set a canonical URL. Adding one is simple:
<head> ... <link rel="canonical" href="http://yourwiki.wikidot.com/canonical-page" /> ... </head>
We could combine it with the <base> tag, which could be generated automatically from the default domain setting:
<head> ... <base href="http://your.default.domain/" /> <link rel="canonical" href="/canonical-page" /> ... </head>
Doing this would ensure that search engine users land on our preferred page and, best of all, that search engine robots would recognize the duplicated content and know its proper location from <link rel="canonical"…>. That means that, e.g., your start page would only be indexed once (in my case, Google indexes both http://wiki.marxismo-online.com.br/ and http://wiki.marxismo-online.com.br/start, which is very bad for SEO). If we can do this, our wikis won't be penalized for this "duplicated" content.
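To illustrate with my own case, a sketch of what both URLs could serve (assuming the bare domain is the preferred address; the exact choice is up to the site admin):

<head> ... <link rel="canonical" href="http://wiki.marxismo-online.com.br/" /> ... </head>

With this tag present on both / and /start, a search engine would fold the two entries into a single indexed page.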
Robots tag = "noindex"
Secondly, Wikidot could use the code below on non-existing pages, so that search engines won't index pages with this default text:
The page does not (yet) exist.
The page blablabla you want to access does not exist.
- create page
This is not only duplicated content, but it also hides our real content behind these "keywords": "page", "exist", "create"… This is especially true on newly created wikis, which are not yet fully populated.
The code needed on these pages is:
<head> ... <meta name="robots" content="noindex" /> ... </head>
The same could be done for admin:manage (does anyone want it to appear on Google?).
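For admin pages, a sketch of what the head could contain (adding "nofollow" is my own assumption here, since the links on an admin panel probably shouldn't be crawled either):

<head> ... <meta name="robots" content="noindex, nofollow" /> ... </head>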