You are familiar, of course, with Urban Dictionary, in which users submit definitions of various slang terms, and the most popular (presumably the most accurate) definition gets voted up to the top of the stack of definitions underneath each slang term. That's a pretty smart way to build a dictionary of slang, which is, of course, the set of all words whose definitions are loose, fluid and pretty much whatever the world defines them as.
Let's extend this model from slang terms to all English language terms. And also, terms in other languages. And also, multi-word sentence fragments. And also, complete sentences. Instead of a definition, what you submit is a translation into some other language. And then you vote up or down on other translations of the same entry. That could work, couldn't it?
There are a few more critical components to this idea.
One is an API which can be integrated into, among other things, an instant messaging client. This means that when somebody sends you a sentence in a language you don't understand, instead of copying and pasting that into Babel Fish, your client automatically sends out a request and receives a selection of possible translations in return. You then have the option of selecting from the list the translation which turned out to be the most accurate - or responding with a more finely-tuned translation of your own.
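The lookup-and-vote cycle described above can be sketched with a toy in-memory store standing in for the real service. Everything here is hypothetical: the class name, method names, and the French entries are illustrative, not a proposed API.

```python
from collections import defaultdict

class TranslationStore:
    """Toy in-memory stand-in for the hypothetical translation service.

    Maps (source_lang, target_lang, phrase) to candidate translations
    with vote counts; lookups return candidates best-scored first, so
    the IM client can show the most popular translation at the top.
    """

    def __init__(self):
        # (src, dst, phrase) -> {translation: votes}
        self._entries = defaultdict(dict)

    def submit(self, src, dst, phrase, translation):
        # A newly submitted translation starts with zero votes.
        self._entries[(src, dst, phrase)].setdefault(translation, 0)

    def vote(self, src, dst, phrase, translation, delta=1):
        # Selecting a translation as "the accurate one" votes it up.
        self._entries[(src, dst, phrase)][translation] += delta

    def lookup(self, src, dst, phrase):
        # Return all known translations, most-voted first.
        candidates = self._entries.get((src, dst, phrase), {})
        return sorted(candidates, key=candidates.get, reverse=True)

store = TranslationStore()
store.submit("fr", "en", "bonjour", "hello")
store.submit("fr", "en", "bonjour", "good day")
store.vote("fr", "en", "bonjour", "good day")
print(store.lookup("fr", "en", "bonjour"))  # ['good day', 'hello']
```

The real thing would sit behind a network API rather than a Python object, but the data model (phrase in, ranked candidates out, votes feeding the ranking) is the whole idea.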
Another - the slightly more questionable component of my idea - is what the machine does when the requested phrase isn't present in the database. In a situation like that, we don't particularly want the machine to just go "urk, I got nothing". Even a wild guess pieced together from direct dictionary lookups would be better than nothing - because then the receiving user can go, "Well, that's just nonsensical, but if I alter a few words I can repair it, okay, here is a slightly better translation" and send it back. Thus, the translation stored in the database will iteratively migrate towards something which is more accurate. I hope.
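That "wild guess" fallback is about the dumbest translator imaginable: look each word up in a bilingual dictionary and pass unknown words through untouched. A minimal sketch, with a made-up four-entry French-to-English dictionary standing in for real data:

```python
def fallback_translation(phrase, word_dict):
    """Crude word-by-word guess used when the exact phrase isn't in
    the database: look each word up in a bilingual dictionary, and
    leave unknown words as-is. Ugly, but it gives the human reader
    something concrete to repair and send back.
    """
    return " ".join(word_dict.get(word.lower(), word)
                    for word in phrase.split())

# Toy French->English dictionary (illustrative entries only).
fr_en = {"le": "the", "chat": "cat", "est": "is", "noir": "black"}
print(fallback_translation("le chat est noir", fr_en))  # the cat is black
print(fallback_translation("le chat dort", fr_en))      # the cat dort
```

Word order, grammar, and idiom are all wrong in general - that's the point. The nonsense output is the seed the human corrects, and the correction is what actually gets stored.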
The last hurdle is the problem of there being millions and squillions and quintillions of possible sentences, many of them differing from each other by only a very small, trivial "edit distance". I hope to solve this by drastically restricting the maximum length of a message in the database. I'm talking "atoms": five words, or 64 bytes, or something like that. This could be the fatal flaw in the whole scheme, but the key constraint is that the plan must require no linguistic skill on my part, which means that it cannot require any kind of language-specific translation intelligence, only basic algorithms and raw data.
My belief is this: it is easier to make a human being manually break up a complicated sentence into shorter, simpler, machine-translatable sentences than it is for a machine to accurately translate the original longer sentence. If we can train people to communicate with greater precision and terseness - and we're well on our way, we already have Twitter and "txtspk" - then we can effectively train ourselves to communicate in an unambiguous sub-language of English/French/Chinese/whatever, which a machine can translate perfectly.
Obviously, some boffin could build on this data set (once it's populated) and make an algorithm capable of translating longer sentences by referring to the various shorter sentences, but that's for another day.
This idea, like all of mine, is unrefined. Somebody want to attempt it?