What was your first reaction when you heard about Blake Lemoine, the Google engineer who announced last month that the AI program he was working on had developed consciousness?
If, like me, you’re instinctively suspicious, it could have been something like: Is this man serious? Does he really believe what he says? Or is this an elaborate hoax?
Set the answers to those questions aside. Instead, focus on the questions themselves. Isn’t it true that even to ask them assumes something crucial about Blake Lemoine: namely, that he is conscious?
In other words, we can all imagine that Blake Lemoine is being deceptive.
And we can do that because we assume that there is a difference between his inner beliefs – what he really believes – and his outer expressions: what he claims to believe.
Isn’t that difference the hallmark of consciousness? Would we ever assume the same about a computer?
Consciousness: ‘the hard problem’
It is not for nothing that philosophers have come to call consciousness “the hard problem”. It’s notoriously difficult to define.
But for the moment let’s say that a sentient being is one who is capable of having a thought and not revealing it.
This means that consciousness would be the precondition for irony, or saying one thing while meaning the opposite. I know you’re being ironic when I realize that your words do not match your thoughts.
That most of us have this ability—and most of us routinely convey our unspoken meanings in this way—is something that, I think, should amaze us more often than it does.
It seems almost definitively human.
Animals can certainly be funny, but not intentionally.
What about machines? Can they deceive? Can they keep secrets? Can they be ironic?
AI and irony
It’s a truth widely recognized (at least among academics) that any research question you could come up with containing the letters “AI” is already being studied somewhere by an army of obscenely well-resourced computer scientists – often, if not always, funded by the US Army.
This is certainly the case with the issue of AI and irony, which has lately attracted a significant amount of research interest.
Of course, since irony is saying one thing when you mean the opposite, creating a machine that can detect it, let alone generate it, is no easy task.
But if we could create such a machine, it would have a multitude of practical uses, some more sinister than others.
For example, in the age of online reviews, retailers have become very excited about so-called “opinion mining” and “sentiment analysis”, which use AI to map not only the content, but also the mood of the reviewer’s comments.
Knowing whether your product is being praised or is becoming the butt of a joke is valuable information.
Or consider content moderation on social media. If we want to limit abuse online while protecting free speech, wouldn’t it be helpful to know when someone is serious and when they are joking?
Or what if someone tweets that they’ve just joined their local terrorist cell, or that they’re packing a bomb in their suitcase and heading to the airport? (Never tweet this, by the way.) Imagine if we could immediately determine whether they’re serious or just being “ironic”.
In fact, given the proximity of irony to lying, it’s not hard to imagine how the whole shadowy machinery of government and corporate surveillance that has sprung up around new communication technologies would find the prospect of an irony detector extremely interesting.
And that goes a long way toward explaining the growing literature on the subject.
AI, from Clippy to facial recognition
To understand the state of current research on AI and irony, it helps to know a little about the history of AI more generally.
That history is usually divided into two periods.
Until the 1990s, researchers tried to program computers with a set of handcrafted formal rules for how to behave in predefined situations.
If you used Microsoft Word in the 1990s, you may remember Clippy, the annoying Office Assistant who endlessly popped up to offer unwanted advice.
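By way of contrast with what follows, here is a minimal sketch of that older, rule-based style; the trigger phrase and the canned tip are invented for illustration, not taken from any actual Office Assistant code.

# A hand-crafted rule in the pre-1990s spirit: the programmer specifies,
# in advance, exactly what to do in one predefined situation.
def clippy_rule(document_text):
    """Return a canned tip if the text matches a hard-coded trigger."""
    if document_text.lstrip().startswith("Dear"):
        return "It looks like you're writing a letter!"
    return None  # no rule matched, so the assistant stays silent

print(clippy_rule("Dear Sir or Madam, ..."))   # the rule fires
print(clippy_rule("Quarterly sales figures"))  # returns None

Everything such a system “knows” sits in rules like this one; nothing is learned from data.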
Since the turn of the century, that model has been replaced by data-driven machine learning and neural networks.
Here, huge amounts of examples of a particular phenomenon are translated into numerical values, on which computers can perform complex mathematical operations to determine patterns that no human could ever detect.
Moreover, the computer does not simply apply a rule. Rather, it learns from experience and develops new operations independently of human intervention.
The difference between the two approaches is the difference between Clippy and, say, facial recognition technology.
Research on sarcasm
To build a neural network capable of detecting irony, researchers initially focus on what some would consider its simplest form: sarcasm.
The researchers start with data scraped from social media.
For example, they can collect all tweets labeled #sarcasm or Reddit posts labeled /s, an abbreviation Reddit users use to indicate they’re not serious.
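To make that labeling step concrete, here is a minimal Python sketch; the posts and the helper function are invented, and a real project would scrape millions of posts from platform APIs or data dumps rather than use a hard-coded list.

# Distant supervision in miniature: a post's own tag becomes its label,
# and the tag itself is then removed from the text.
def distant_label(text):
    """Label a post sarcastic (1) or not (0) based on the author's own tag."""
    lowered = text.lower()
    if "#sarcasm" in lowered or lowered.rstrip().endswith("/s"):
        return 1  # the author flagged the post as sarcastic
    return 0      # untagged posts are treated, noisily, as non-sarcastic

posts = [
    "Great, another Monday. I just LOVE mornings. #sarcasm",
    "The launch went smoothly and the team is thrilled.",
    "Oh sure, that meeting really needed to be an hour long /s",
]

dataset = [
    (p.replace("#sarcasm", "").removesuffix("/s").strip(), distant_label(p))
    for p in posts
]
print(dataset)

Stripping the tag before training matters: otherwise the model simply learns to spot “#sarcasm” rather than the surrounding patterns.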
It’s not about teaching the computer to recognize the two separate meanings of a particular sarcastic post. Meaning really doesn’t matter.
Instead, the computer is instructed to look for recurring patterns, or what one researcher calls “syntactic fingerprints”: words, phrases, emojis, punctuation, errors, contexts, and so on.
In addition, the dataset is enriched with further streams of examples – other messages in the same threads, for example, or from the same account.
Each new individual example is then run through a series of calculations until we arrive at a single determination: sarcastic or not sarcastic.
Finally, a bot can be programmed to reply to any original poster and ask if they were being sarcastic. Any response can be added to the computer’s growing mountain of experience.
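For the classification step itself, here is a toy sketch using scikit-learn’s TfidfVectorizer and LogisticRegression; it is a deliberately simplified stand-in for the far larger neural networks the research actually uses, and the four-post corpus and its labels are invented.

# A toy "syntactic fingerprint" classifier: character n-grams stand in for
# the recurring surface patterns (CAPS, repeated punctuation, "...") that
# real detectors learn from millions of self-labeled posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "oh GREAT, another delay. fantastic!!!",
    "wow, what a totally shocking result...",
    "the train was delayed by twenty minutes",
    "the results were published this morning",
]
labels = [1, 1, 0, 0]  # 1 = sarcastic, 0 = not sarcastic

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, labels)

# Every new post is reduced to a single verdict: sarcastic or not.
print(model.predict(["oh sure, that will definitely work..."]))

Note that the character n-grams capture surface form only – capitalization, punctuation, trailing ellipses – without ever representing what any post means, exactly as described above.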
The most recent sarcasm detectors achieve a success rate of an astonishing 90% – higher, I suspect, than many humans could manage.
So, assuming AI continues at the pace that took us from Clippy to facial recognition technology in less than two decades, can fully ironic androids be far off?
What is irony?
But isn’t there a qualitative difference between sifting through the “syntactic fingerprints” of irony and actually understanding it?
Some would say not. If a computer can be taught to behave exactly like a human being, it doesn’t matter whether a rich inner world of meaning lies behind its behavior.
But irony is arguably a special case: it relies on the distinction between external behavior and internal beliefs.
Here it may be worth remembering that while computational scientists have only recently taken an interest in irony, philosophers and literary critics have been thinking about it for a long time.
And perhaps exploring that tradition would shed old light, as it were, on a new problem.
Of the many names one could mention in this context, two are indispensable: the German Romantic philosopher Friedrich Schlegel; and the post-structuralist literary theorist Paul de Man.
For Schlegel, irony does not simply involve a false, external meaning and a true, internal one. Rather, in irony two opposed meanings are presented as equally true. And the resulting indeterminacy has devastating consequences for logic, especially the law of non-contradiction, which holds that a statement cannot be true and false at the same time.
De Man follows Schlegel on this point and, in a sense, universalizes his insight. He notes that any attempt to define a concept of irony is bound to be tainted by the phenomena it purports to explain.
Indeed, de Man believes all language is infected with irony, and involves what he calls “permanent parabasis”. Because people have the power to hide their thoughts from one another, it will always be possible – permanently possible – that they do not mean what they say.
In other words, irony is not one form of language among many. It structures – or rather, haunts – every use of language and every interaction.
And in that sense it transcends the order of proof and calculation. The question is whether the same applies to humans in general.
This article was republished from The Conversation under a Creative Commons license. Read the original article.