This section on the Community is no longer supported, in favour of Wikidot's Official Feedback Site.
It is retained here for archiving purposes.
Wish List
Tags
Posted by zoobab on 24 Aug 2008 14:44, last edited on 05 Mar 2009 08:01
This wish is open
Description
Wikidot service is not reliable since it relies on a single point of failure: the database.
Wikidot should be available 24/7, even if there is a fire in the datacenter or an atomic explosion at the Dallas data center.
Ideally, Wikidot websites should be ZIP files with a standard data structure, so that a site can be moved from one Wikidot instance to another.
Kick databases out!
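As a rough sketch of what such a portable export could look like (the layout, the manifest format, and the function name below are invented for illustration, not an existing Wikidot feature):

```python
import json
import zipfile

def export_site(pages, path="site-export.zip"):
    """Pack a site into a ZIP with a simple, portable layout.

    `pages` maps page names to wiki-text source. The pages/ directory
    and manifest.json layout are hypothetical, not a Wikidot standard.
    """
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as zf:
        manifest = {"format": "wiki-export-v1", "pages": sorted(pages)}
        zf.writestr("manifest.json", json.dumps(manifest, indent=2))
        for name, source in pages.items():
            zf.writestr(f"pages/{name}.txt", source)

export_site({"start": "+ Welcome\n\nHello world.", "about": "About this site."})
```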
Basically that's what Google did with googlefs.
Kick databases out!
As far as I know, the database is mirrored continuously and the service runs 24 x 7 without any planned outage time.
After 40 years in data processing, I find it utter nonsense to go back to plain non-database files in the name of open source!
Wikidot rev 1 is open source.
The database it uses is free to use and open source.
Anyone in the world can build their own Wikidot service.
You can do it yourself: it is in YOUR hands to eliminate every database in your own installation and use ZIP files instead.
Good luck!
Service is my success. My web tips: www.blender.org (open source), Wikidot-Handbook.
You can ask questions and contribute in the German-speaking » user community for Wikidot users or
in the German » Wikidot Handbook.
For a service that already offers free web space and bandwidth without forced ads, you'd have to be nuts to demand a god-knows-how-much restructuring of an entire system. Oh well, peanut butter is still made from nuts.
this is a forum signature. really.
Fereal, thank you for your comment. You expressed my opinion here :) It's unfortunately true that sometimes people demand, as you said, "god-knows-how-much" and don't appreciate how much they get for FREE. Sad, but true…
I agree.
This is FREE! Most other free hosting services have downtime from time to time.
If you want to restructure the whole system (which is by no means easy), feel free to do so. Restructuring involves an enormous amount of work.
However, I reckon Wikidot should have some backup servers.
hubewa
Wikidot DOES have backup servers… in fact, each time you save a page, your page is being copied to (at least) 3 different servers.
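As an illustration of that idea (a sketch only; the class, function, and replication factor below are stand-ins inferred from the claim above, not Wikidot's actual code):

```python
class Replica:
    """An in-memory stand-in for one storage server."""
    def __init__(self, name):
        self.name = name
        self.pages = {}

    def write(self, page, content):
        self.pages[page] = content

def save_page(replicas, page, content, min_copies=3):
    """Acknowledge a save only once enough replicas hold the page."""
    copies = 0
    for replica in replicas:
        replica.write(page, content)
        copies += 1
    if copies < min_copies:
        raise RuntimeError(f"only {copies} copies written, need {min_copies}")

save_page([Replica(f"server{i}") for i in range(3)], "start", "+ Welcome")
```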
λ James Kanjo
Blog | Wikidot Expert | λ and Proud
Web Developer | HTML | CSS | JavaScript
Yes, this is correct.
Regarding Zoobab's original post:
Inaccurate, since the service can be started on any of the replicated copies. The main problem here is that Wikidot needs large boxes and it can take time to find new boxes if the main ones have problems. We do not have large, unused boxes lying around (if we were a lot richer, we would).
This is a matter of cost. It is possible to run a 24/7 service, but it requires, for example, people available to respond to problems at any hour, and live hot backups for all hardware. So you double your infrastructure costs, double your system admin costs, and then triple your staffing costs on top of that.
The sane way to manage reliability is to look at different potential types of failure and find the cheapest way to recover from any of them.
So: dead hardware is rare, and means waiting for a replacement. Database damage is the most serious, so we replicate. Server stress is the most frequent, so we do extensive caching and will continue to do that.
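Read as a small decision table, that triage might be encoded like this (the labels come from the post above; the structure is purely illustrative):

```python
# Failure/recovery triage as described above.
FAILURE_MODES = {
    "dead hardware":   ("rare",          "wait for a replacement"),
    "database damage": ("most serious",  "replicate the database"),
    "server stress":   ("most frequent", "extensive caching"),
}

for failure, (frequency, recovery) in FAILURE_MODES.items():
    print(f"{failure}: {frequency} -> {recovery}")
```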
Portfolio
Just to add to and agree with what Pieter has written: in giving performance details, most hosting providers will say that their sites are up 98.4% or 98.5% of the time; none will ever claim 100%. And even moving from 98.5% to 99.0% uptime can be phenomenally expensive. I have always found Wikidot to be very reliable indeed, even with the recent problems caused by the snow leopard page.
Rob
Rob Elliott - Strathpeffer, Scotland - Wikidot first line support & community admin team.
Just out of interest then, why didn't Wikidot use one of those backup servers to replace the broken-down server when the Wikidot system crashed?
hubewa
Well, funnily enough, they did use those backup servers to replace the broken-down server when the Wikidot system crashed. But then the backup servers were overloaded by too many users trying to access them at once, and they crashed too.
λ James Kanjo
Blog | Wikidot Expert | λ and Proud
Web Developer | HTML | CSS | JavaScript
We did actually switch to a backup server for the application engine, but the problem was not the capacity of the servers; it was the massive load from the snowleopard page hitting a bug/misfeature in a regular expression library. A particular combination of regular expressions (these are used to denormalize static links in the final HTML) was causing massive memory consumption and response times of 20-30 seconds per page load. Combined with the tens of thousands of people clicking on the page, that meant the backup server died just as promptly as the main server.
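For anyone curious what such a blow-up looks like, here is a classic catastrophic-backtracking example in Python; it is purely an illustration, not the actual expression Wikidot used:

```python
import re
import time

# Nested quantifiers in (a+)+ give the engine exponentially many ways to
# split the input, all of which are tried before the match fails on "b".
pattern = re.compile(r"(a+)+$")
subject = "a" * 26 + "b"  # no match possible, so every split is explored

start = time.time()
pattern.search(subject)
print(f"search took {time.time() - start:.1f}s")  # roughly doubles per extra "a"
```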
We of course fixed that regexp issue, and added static html caching for that particular page, both of which brought things back to normal.
Portfolio
Funny. Do you reckon Wikidot is going to make those backup servers more powerful, or does it lack the money?
I reckon backup servers are pointless unless they can support what the main server can.
hubewa
Yeah, it was like that once upon a time. But Wikidot's growth in popularity has been explosive, so much so that the backup servers can no longer handle what the main server does. I don't think anybody saw that coming.
λ James Kanjo
Blog | Wikidot Expert | λ and Proud
Web Developer | HTML | CSS | JavaScript
Amazingly, Wikidot sounds like Google.
It has gone from relative obscurity to the massive popularity it enjoys today.
Ah well, good luck with managing the system in the future.
hubewa
Comparing Wikidot to Google! Michal will be flattered, I'm sure :D
λ James Kanjo
Blog | Wikidot Expert | λ and Proud
Web Developer | HTML | CSS | JavaScript
I explained the problems and our proposed solution on the blog. It's not so difficult: we apply static HTML caching to all sites, which reduces load on the main servers by 60-80%. We can then plan for multiple backup servers in the application cloud, and also provide static read-only websites if the app cloud goes down.
Deploying servers is actually expensive (these are large co-located machines, and we cannot run cheap boxes like Google does, at least not today), and at this stage we prefer to make the code faster and less dependent on centralized servers and databases.
The static HTML cache is already working in test; we've not yet deployed it.
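As a rough illustration of the kind of static HTML caching described here (a sketch only; the cache directory, TTL, and render function are assumptions, not Wikidot's implementation):

```python
import os
import time

CACHE_DIR = "html-cache"
TTL = 300  # seconds a cached page stays fresh; the value is an assumption

def render_page(name):
    # Stand-in for the expensive wiki-source -> HTML rendering pipeline.
    return f"<html><body><h1>{name}</h1></body></html>"

def get_page(name):
    """Serve from the static cache when fresh, re-render otherwise."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, f"{name}.html")
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < TTL:
        with open(path) as f:
            return f.read()  # cache hit: no application-server work needed
    html = render_page(name)
    with open(path, "w") as f:
        f.write(html)  # refresh the cache for subsequent hits
    return html

print(get_page("start"))
```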
Portfolio