
 



Programming IT Technology

N.Y. Times Magazine Chats With ALICE Bot Creator 238

aridg writes: "This week's New York Times Magazine has an article about Richard Wallace, the programmer of the ALICE AI chatbot that won first place in several competitions for realistic human-like conversation. Wallace sounds like a pretty unusual and interesting fellow; the article quotes an NYU prof both praising ALICE and saying to Wallace: '... I actively dislike you. I think you are a paranoid psycho.' A good read. [Usual NY Times registration disclaimers apply.]"
This discussion has been archived. No new comments can be posted.


  • the header will look like:
    N.Y. Times Magazine Chats With ALICE Bot
  • hmm (Score:2, Funny)

    Anyone think it's possible they might have just ended up interviewing the latest version of ALICE?
    • Nope, Wallace sounds like your usual geek to me. Alice is too 'real-world' to give the [coffee/mountain dew] paranoia appearance of your local gurus.

  • The link you want (Score:3, Interesting)

    by jcoy42 ( 412359 ) on Saturday July 06, 2002 @07:00PM (#3834574) Homepage Journal
    is right here [majcher.com].
  • by mprinkey ( 1434 ) on Saturday July 06, 2002 @07:01PM (#3834576)
    AP - The artificial lifeform known to the computing world as ALICE came to a violent and tragic end this evening. The good-natured AI was interacting with several online users when an unprovoked attack was leveled against her by a geek gang known as Slashdot. The miscreants pelted ALICE with connection requests until she finally expired. FBI and local authorities are investigating and promise to bring these geeks to justice.

    ALICE is survived by a grandfather TRS-80, her mother C-64, her sister IBM RS6000, and lifelong companion Athlon. In lieu of flowers, the family asks that donations be made to the Free Software Foundation.
    • I would think her mother would be ELIZA....hmm.

    • I was actually reading the NY Times website and tried to click through to see what this Alice was all about, and was confused when the link didn't work. Weird. Then I realized that, like all NY Times tech stories, this one was now resting comfortably on the front page of Slashdot, allowing its readers to do their dirty work and deprive the world of another interesting site for a few days.

      It's becoming less and less useful to read slashdot - "News for Herds. Stuff that is by now being served from a flaming web server."
  • by jest3r ( 458429 ) on Saturday July 06, 2002 @07:02PM (#3834580)
    Perhaps the creator needs to head down to the nearest Starbucks and try some real-world conversation .. here is a log from a chat with an ALICE bot ..

    Hi, I'm Alison What's your name?
    You: bubba baby ...
    Alison: Don't call me "baby". Are you a man or a woman?
    You: 100% man
    Alison: Do you have any conditions I should know about?
    You: what?
    Alison: Didn't you understand it?
    You: so what should we do now?
    Alison: Right now? Oh. Whatever you like.
    You: you know what i like
    Alison: Interesting gossip: 217-person said he like.
    You: 217? what does that mean?
    Alison: Try to determine if this is a person or a computer responding.
    Alison: Does "that" refer to "Interesting gossip: 217-person said he like" ?
    You: this is making no sense, l8r
    Alison: If you could have any kind of robot what would it be?
    You: it would be you
    Alison: You can download me for free! Or buy the CD Rom for $29.95.

    (http://www.blivion.com/alison/)
  • For funny Eliza sessions check out AOLiza [fury.com]. The page hasn't been updated in a while, but it is definitely a classic.
    • AOL? I guess that would go something like this:

      > Hello, I am Eliza.
      * U HAVE ANY COOL MEGADEATH MP3S?
      > Don't you ever say Hello?
      * DAMIT WHY DONT YOU LOSERS TALK ABOUT MUSIC NOT NO GAY SHIT
      > We were discussing you, not me.
      * GET A ROOM, THIS IS HEAVY METAL, NOT PANSYLAND
      > I see.
      ...
    • That's wonderful! I love how the typical user makes only just about as much sense as AOLiza, and has far worse spelling ability.
  • This kind of chat-bot program has been around for a long time. I had RACTER on my PC, and it was pretty good. You can read some snips of RACTER and ELIZA [sbu.ac.uk] duking it out.
  • ...Here [accessterminal.com], as Alice herself is busy being slashdotted.
  • Anthropomorphize much? (Score:3, Interesting)

    by Anonymous Coward on Saturday July 06, 2002 @07:11PM (#3834610)
    I wonder how many reams of chatlogs the author had to go through to find those "witty" conversational snippets. I've "chatted" with ALICE a few times myself. (I do tech support, and frequently have long stretches with nothing but the Internet to entertain me.) While she is definitely a most impressive AI bot, she is also not mistakable for human by anyone of moderate intelligence. Like that "That depends on what you mean by 'think'." I recognize that as one of her stock dodges when she doesn't "understand" a question, with 'think' replaced by whatever.
    But then again, my standard stress test for an AI program is to try to get it to discuss existential philosophy. That's probably a bit evil.

    At any rate, while I think it's nifty that AI constantly hovers in the public mind, it's a bit premature (and misleading) to think that HAL-level conversational ability is anywhere close to being here.
    • But then again, my standard stress test for an AI program is to try to get it to discuss existential philosophy.

      Try to get the average chat user to discuss existential philosophy. I'd say there's a more than even chance you'll get better results from the AI.

  • by awptic ( 211411 ) <infiniteNO@SPAMcomplex.com> on Saturday July 06, 2002 @07:12PM (#3834615)
    ALICE is nothing more than a bunch of preprogrammed responses to common statements and questions; what the hell is the big deal about that? Anyone with enough time on their hands could create something similar. What I would like to see is an AI program which can actually follow conversation and make responses relevant to the topic of discussion, even if the statement didn't directly reference it.
    • ALICE can follow conversations and stay on topic with the tag.

      I think you are dismissing Dr. Wallace's work too quickly. Take a look at all the capabilities of AIML.
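For reference, the core idea in AIML is the category: a pattern with wildcards mapped to a response template. A hypothetical, much-simplified matcher (not the real AIML engine, and omitting AIML's context-tracking features) might look like this:

```python
# Minimal sketch of the AIML "category" idea: patterns with "*" wildcards
# mapped to response templates. All patterns/responses here are invented.
import re

CATEGORIES = [
    ("HELLO *", "Hi there! How are you?"),
    ("MY NAME IS *", "Nice to meet you, {0}."),
    ("*", "Interesting. Go on."),  # catch-all, like an AIML default category
]

def respond(line):
    text = line.upper().strip()
    for pattern, template in CATEGORIES:
        # Turn the wildcard pattern into a regex: "*" captures any text.
        rx = "^" + re.escape(pattern).replace(r"\*", "(.*)") + "$"
        m = re.match(rx, text)
        if m:
            return template.format(*(g.strip().title() for g in m.groups()))
    return "..."
```

Real AIML adds recursive pattern reduction and context tags on top of this, which is where most of ALICE's apparent coherence comes from.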
    • Greg Egan has a great story (I believe it is called "Learning to Be Me") about this small computer (jewel) that you get implanted in your brain as a small child. The premise is that all other parts of the body can be readily replaced, apart from the brain. Thus, the only obstacle to eternal life is copying the brain.

      The jewel sits in your head, monitoring your inputs (sight, sound, tactile...) and your outputs. Eventually, it is consistently able to predict your actions. It has learned how to be you.

      Later in life, it is time for your transference, where the jewel is given control over the outputs, and your brain takes the back seat. Of course, being a good fiction short, the jewel soon diverges from what you want to do, but the real you has no outputs... and is eventually scooped out to be replaced by some biologically inert material, while the jewel lives to be 1000s of years old.

      It's been several years since I read it, but good stuff all the same.
    • by Tablizer ( 95088 ) on Saturday July 06, 2002 @08:52PM (#3834944) Journal
      (* What I would like to see is an AI program which can actually follow conversation and make responses relevant to the topic of discussion *)

      You realize that would disqualify most slashdot participants as "intelligent".
    • ALICE is nothing more than a bunch of preprogrammed responses to common statements and questions, what the hell is the big deal about that?
      The big deal about that is that preprogrammed responses to common statements and questions are a huge part of human conversation, or less generously that human conversation is mostly useless filler.

      The more I read /. the more I find Wallace's misanthropy rubbing off on me.

      -jhp

      • Human conversation may be mostly useless filler, but actually fills something. It is rarely filler for filler's sake.
        • Human conversation may be mostly useless filler, but actually fills something. It is rarely filler for filler's sake.
          I dunno about that. Have you ever been to an upper-crust dinner party or a family reunion?

          -jhp

    • by AnotherBlackHat ( 265897 ) on Saturday July 06, 2002 @10:23PM (#3835284) Homepage

      ALICE is nothing more than a bunch of preprogrammed responses to common statements and questions, what the hell is the big deal about that?
      The big deal is that as bad as it is, it still beats the competition.
      • The lesson to take away from that is that small talk is not that complicated, at least on the surface. It would be much harder, for example, to make an AI that could read a newspaper article and discuss it with someone. Or to have a conversation that was actually interesting as well as convincing. Or even to pay attention to the subtext in the small talk it was having.

        Any sufficiently limited task in AI is relatively easy, although it may lead to interesting applications (expert systems, etc). The fact that the competition doesn't make as good small talk doesn't really say anything about the relative merits of the programs. In fact, ALICE would likely work best as a complement to another AI program, which could try to form opinions of the person while ALICE takes care of the social niceties.

        • The lesson to take away from that is that small talk is not that complicated, at least on the surface. It would be much harder, for example, to make an AI that could read a newspaper article and discuss it with someone. Or to have a conversation that was actually interesting as well as convincing. Or even to pay attention to the subtext in the small talk it was having.
          Subtext? How about the text.
          I've never seen a chatter bot that could respond reasonably to "I'm sorry, could you rephrase that?".
          The best ones respond with a non sequitur.
          Before bots try and understand what other people say,
          they should understand what they say.

          IMO, a better contest would be even more limiting.
          For example, pick 2000 words that are allowed,
          and limit the conversation to those words.

          -- this is not a .sig
    • Oh, you mean like those !@#$%^&! tech support bots? Some make a fair stab at faking an ongoing conversation, but they still only really know how to respond as dictated by their keywords.

      And personally, I am about sick of 'em. Ever since their spread into email tech support, it's become nigh well impossible to get a truly relevant response.

  • He again attempted suicide, this time landing in the hospital.
    This guy better be glad his apartment's where it is! Closer to topic, from what I read, this didn't seem like the kind of A.I. I'd want in a conversational bot. If he sits there and looks at questions, then inputs his own canned responses to those questions, is the bot really learning anything on its own? I think he's just forcefeeding it. Poor ALICE.
    • If he sits there and looks at questions, then inputs his own canned responses to those questions, is the bot really learning anything on its own?

      But, isn't that similar to how a large amount of our conversational activity is learned? Children pick up the "canned" responses of adults. His point seems to be that this accounts for a large amount of what we talk about every day.

  • If he needs money... (Score:3, Interesting)

    by geekd ( 14774 ) on Saturday July 06, 2002 @08:01PM (#3834754) Homepage
    If this gent needs cash, he can just make a cybersex version of Alice and sell her to the porn sites.

    Actually, I bet this has already been done.

  • by eyepeepackets ( 33477 ) on Saturday July 06, 2002 @08:01PM (#3834756)
    check back in twenty years.

    There is much too much anthropomorphizing going on in the A.I. field and this has always been true. We want to make machines which think like we do, but the sad part is that we really don't yet know the full mechanics of how our brains work (how we think.) And yet we're going to make machines which think like we do? Rather dumb, really.

    IMO, A.I. researchers would do better getting machines to "think" in their own "machine" context. Instead of trying to make intelligent "human" machines, doesn't it make more sense to make intelligent "machine" machines? For example, what does a machine need to know about changing human baby diapers when it makes more sense for the machine to know about monitoring its log files and making backups and other self-correcting actions (changing its own diapers, heh)?

    Seems to me my Linux machines are plenty smart already, there are just some missing parts:

    1. Self-awareness on the part of the machine (not much more than self-monitoring with statefulness and history.)

    2. Communication with decent machine/machine and machine/human interfaces (direct software for machine/machine, add human language capability or greatly improved H.I. for human/machine. Much work has already been done on these.)

    3. History of self/other interactions which can be stored and referenced (should be an interesting database project.)

    Make smart machines, not fake humans.

    • (* For example, what does a machines need to know about changing human baby diapers when it makes more sense for the machine to know about monitoring it's log files and making backups and other self-correcting actions *)

      But to communicate with humans, you need to know this kind of stuff.

      For example, its boss may say, "Your last report resembled the contents of a used baby diaper."

      A robot that did not know anything about diapers would not realize that the boss is saying that the report is no good, and start asking annoying questions to try to figure it out.

      If companies wanted somebody without social clues, they would hire geeks instead of demanding "excellent communications and social skills".
      • I understand your point but do we really need machines to do this? Wouldn't a human be a better, smarter option for your example? Remember, a good carpenter doesn't use a hammer to drive a screw -- proper tools for the job and all that.

        I don't see machines ever replacing humans, at least not in the near future. I do think machines can be made to be smart enough to do a lot of the grunt work we now use humans for.

        Machines should augment life, not replace it.

        • (* I don't see machines ever replacing humans, at least not in the near future. I do think machines can be made to be smart enough to do a lot of the grunt work we now use humans for. *)

          As soon as they get to the point where they can do real grunt work, they will be able to take over other stuff rather soon after I suspect. Once the ball starts rolling, it rolls fast.

          Thus, we might as well try to automate PHB thinking, and not just rational thinking, otherwise you will automate the geeks out of a job faster than PHB jobs.

          Much of a physician's job can *now* be automated: select symptoms from a list or queried-list, and you get more questions/tests to ask or the most probable causes in ranked order. (The reason it is not used in practice is partly for legal reasons, and partly because you need a doctor currently to double-check the results anyhow, being that it is not perfect.)
    • There is much too much anthropomorphizing going on in the A.I. field and this has always been true.

      Really? How do you know this? When is the last time you read an AI research paper in a journal? Would you care to enlighten us as to how serious AI research is too anthropomorphic?

      Or were you just talking about the hype surrounding AI which is independent of serious research in AI?

      Please, we in the AI community would love to know... Otherwise, stop spreading this hogwash that has been giving AI a bad name for the past fifty years.

      For example, look at recent advances in NLP due to the shift towards statistical (empirical i.e. data-based, not linguistics-based) methods. For example, anaphora resolution is more-or-less a solved problem as of a few years ago. (Anaphora is the use of a linguistic unit, such as a pronoun, to refer back to another unit. Anaphora resolution is figuring out what is referred to. i.e. the meaning of "she" can be determined with over 95% accuracy in corpora where humans do not find ambiguity.)

      Many people do not realize how many small incremental advances are being made using machine-based approaches, and assume that all we do is run around making airplanes modelled after birds.

    • Self-awareness on the part of the machine (not much more than self-monitoring with statefulness and history.)

      Self-awareness is a lot more than being able to read internal registers and maintain logs, bucko. At least it is for me; I dunno 'bout you.

      I think part of the reason for this woeful ignorance of how the human mind works stems from the fact that thanks to the bad reputation psychology got from the excesses of certain psychotherapeutic schools, would-be AI researchers have thrown the baby out with the bath water and ignored modern cognitive psychology as well.

      Here's a big hint: if you still think that cognitive psychology is based on subjective introspection, you're about a century behind the curve. This is, IMHO, a large part of the reason that self-proclaimed authorities like Marvin Minsky and Daniel Dennett seem so badly divorced from reality -- having chosen to ignore high-level scientific studies of the mind as a priori bullshit, and being unable to extrapolate from neurons the behavior of a complete mind, they have reverted to ancient Greek-style philosophy-in-a-factual-vacuum.

  • I don't know why, but I read the title of the story as N.Y. Times Magazine Cheats With ALICE Bot Creator..
  • Check out www.fury.com/aoliza [fury.com] if you want to see some amusing logs of AIM users who were fooled into believing that they were talking to real people that they knew, when they were actually talking to an AI bot, like ALICE.
  • I wrote a pretty good chatter; if anybody cares to check it out, it's on IRC at DALnet's #planetchat. Say hi to ^Bartend. The chat is only for private message. In the channel it just runs a bunch of silly scripts.
    ^Bartend must be pretty cool, since some girls have proposed to him. LOL.
  • Is there a bot that learns and is as good as infobot [infobot.org]? I tried the original IRC Alice bot, but she was buggy. There's a new one, but it is too new.

    And also, is there one active on any IRC servers? Thank you in advance. :)
  • "There was an unconnected fax machine with the intelligence of a computer and a
    computer with the intelligence of a retarded ant"
  • That Perlin guy he fired e-mail back and forth with is really quite interesting. He's done a lot of good graphics work. The last time I saw him lecture was in 1997 at SIGGRAPH.
  • (* It is a strange kind of success: Wallace has created an artificial life form that gets along with people better than he does. *)

    The geek dream!

    (* He's more relaxed than I've ever seen him, getting into a playful argument with a friend about Alice. The friend, a white-bearded programmer, isn't sure he buys Wallace's theories. ''I gotta say, I don't feel like a robot!'' the friend jokes, pounding the table. ''I just don't feel like a robot!'' ''That's why you're here, and that's why you're unemployed!'' Wallace shoots back. ''If you were a robot, you'd get a job!'' *)

    What about making an Interview Bot? Sell it as a job-finding practice tool.

    Someday robots will be programmed with responses that PHB's want to hear. A true logical robot would be too honest and frank. Spock would probably be hard to employ in a typical cubicle setting. PHB's don't want to hear the truth, so robot makers better figure out how to make them give BS answers.

    As a geek, responding to PHB's properly is far more brain-intensive than doing actual work. I think doing actual work will be perfected by AI long before pleasing PHB's.

    Unless of course, PHB's are automated first. However, I doubt that because ultimately one must sell to humans, and humans are not logical. Thus, the lower rungs will probably be automated first because logic is simpler to automate than human irrationalism.

    Then we can all hang out and drink and smoke with Wallace as robots take over bit by bit.
  • I am amazed that nobody yet has tried to create a "learning" chat bot. It would be pretty straightforward.

    Basically the chat bot would follow simple rules, similar to regular expressions, that would trigger particular statements in response to statements from the user. Each of these rules could also test for "flags" that could be set and unset by rules which "fire". Then, some algorithm could be devised for creating new rules randomly, based on observed behavior. The effectiveness of a rule could be determined by how long the conversation continues after that rule has been used. Good rules could be moved up in priority, and bad rules moved down (and eventually deleted) on this basis.
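A minimal sketch of the scheme proposed above, with regex-triggered responses whose priority is reinforced by how long the conversation continues after they fire. All rule names and the exact scoring scheme are invented for illustration:

```python
# Learning rule-based chatter: each rule is a regex trigger plus a response,
# and rules earn credit based on how many turns follow after they fire.
import re

class Rule:
    def __init__(self, pattern, response):
        self.pattern = re.compile(pattern, re.I)
        self.response = response
        self.score = 0.0  # running measure of how well this rule works

class LearningBot:
    def __init__(self, rules):
        self.rules = list(rules)
        self.turns = 0
        self.fired = []  # (rule, turn number it fired on)

    def reply(self, line):
        self.turns += 1
        candidates = [r for r in self.rules if r.pattern.search(line)]
        if not candidates:
            return "Go on."
        best = max(candidates, key=lambda r: r.score)  # highest-priority rule
        self.fired.append((best, self.turns))
        return best.response

    def end_conversation(self):
        # Credit each fired rule with the number of turns that followed it,
        # so rules that tend to kill the conversation accumulate nothing.
        for rule, turn in self.fired:
            rule.score += self.turns - turn
        self.fired = []
        self.turns = 0
```

A fuller version would also need the rule-mutation step the comment proposes: randomly generalizing or specializing existing patterns and eventually deleting low-scoring mutants.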

    • Of course people tried this. It doesn't work, at least not better than ALICE. Human language cannot be described by regular expressions, nor even by context-free grammars (one level up the hierarchy of formal languages), though CFGs are close. So you get ungrammatical garbage or prepared responses like ALICE's.
      • Of course people tried this. It doesn't work, at least not better than ALICE.
        Then you won't have any trouble providing a reference to this research - will you?
        Human language cannot be described by regular expressions, nor even by context-free grammars (one level up the hierarchy of formal languages), though CFGs are close. So you get ungrammatical garbage or prepared responses like ALICE's.
        Haven't you been listening? Nobody is suggesting that such a mechanism could approximate human intelligence, but it might be able to find enough conversational patterns to give the illusion of intelligence for a while.
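For what it's worth, the formal-language point above is standard: a pattern as simple as aⁿbⁿ (n a's followed by exactly n b's, a toy model of balanced or nested structure in grammar) is context-free but provably not regular. A quick illustration, with hypothetical function names:

```python
# a^n b^n is context-free but not regular: the naive regex "a*b*" accepts
# strings with unbalanced counts, while adding a single counter (the extra
# power a pushdown automaton provides) gets it right.
import re

def regex_guess(s):
    # Best a plain regex over {a, b} blocks can do: right shape, wrong counts.
    return re.fullmatch(r"a*b*", s) is not None

def context_free_check(s):
    # Match the shape, then compare the counts -- the "counter" step.
    m = re.fullmatch(r"(a*)(b*)", s)
    return m is not None and len(m.group(1)) == len(m.group(2))
```

Here "aabb" is in the language while "aab" is not, but the pure regex accepts both, which is the sense in which regular expressions fall short of even toy grammar.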
  • After all, outcasts are the keenest students of ''normal'' behavior -- since they're constantly trying, and failing, to achieve it themselves.

    Wow. Besides the general theme of people being repetitive dumbasses, this part stood out the most.

    Of course, I've always been approaching it from the evolution-driven genetic motivations of people to create the various stable equilibria we have called "cultures" or "societies". (Perhaps Wolfram was right - from simple (genetic) rules emerge complex structures.)

    Did that part of the article really ring true for anybody else?

    • Did that part of the article really ring true for anybody else?
      Yes, yes it did, being an outcast of sorts myself. It fits well with the common tale of the insanity of the "sane", for one thing, and having attempted to learn real-life dating I can't see for the life of me how anyone sane would put themselves through that ridiculous Masonic handshake and basket of expectations just to be partnered for the night. It was all so much easier when it was non-verbal.

      -jhp

  • ...give the Ya-Hoot Oracle [yahoot.com] a try. I find it to be much better than "Alice" in terms of its comprehension and its range of subject matter. It can really seem to be able to read minds and see into the future...
  • It might behoove this guy to do nothing more than record IRC chats and use them as responses and "modding up" the ones that seem to keep the person on the other end chatting the longest.

    Just an idea.
  • There's something my cat Toudouce and I have Alice doesn't: we know we exist. My iMac doesn't know it exists. This is what separates computers from us. My cat is a she, my computer is an it.

    Alice sounds like she knows she exists, but in fact she's parroting Richard Wallace's input. Alice is just a fascinating, self-unconscious parrot.

  • Bah, Alice's nothing. Try Prof.Phreak bot:
  • It has to be the worst implementation of case-based reasoning I've ever seen. The only reason it 'wins competitions' is because nobody who actually does work in the field would bother to get involved with these 'competitions', ROTFL... Just check out the ALICE web page to see how stupid the approach actually is...
  • by theolein ( 316044 ) on Sunday July 07, 2002 @06:28AM (#3836280) Journal
    As someone who has had a long struggle against bad depression and various mental ailments, and who has managed to right himself, I can testify to Wallace's struggle with jobs and his immense fear of the world, because his paranoia is more fear than anything else.

    From my own perspective I would see Wallace's story somewhat differently. I see someone who missed out in childhood on the self-confidence needed to make friends, cope with setbacks without taking them too seriously, etc. His compulsion with Alice, and the obvious amount of time he must have spent in front of the computer building it, seems like a logical retreat from the real world, while still trying to gain the recognition he wanted at the same time. Anyone who doesn't get at least mildly depressed after spending 72-hour sessions in front of the computer is not human. I have an idea that he then made things worse by not taking care of himself (sleep, sport, seeing friends, etc.) and the use of dope. Very depressed people tend to lose their orientation in both a physical as well as mental fashion, and grass doesn't help here except to alleviate the anxiety felt by the person, who obviously gets more and more frightened the more disoriented they are.

    Left untreated (and I don't mean medication, just normal common-sense taking care of oneself, speaking to friends, etc.) the depression eventually starts to take on other forms, one of which is manic depression (or bipolar disorder); another is schizophrenia. It depends on the person. However, once the problems have gotten this far, it becomes very difficult or practically impossible for the person to cope without fairly strong medication, and the last thing they should be doing is exposing themselves to the situation that created their problem in the first place. Sadly, concentrating on the computer enables people like this to forget their suffering for a while at least, and they often become obsessively hooked to the screen.

    Long walks, good sleep, decent food and one or two good friends would have done more for Richard Wallace, IMO, than anything else including ALICE.
  • If you visit iMortalportal.com [imortalportal.com], you can create a web-based alicebot with your own customized personality. There's a more flexible, though less aesthetically-refined interface to the same content available on Pandorabots.com [pandorabots.com].

    As an added bonus, these sites are powered by my favorite programming language - Lisp [lisp.org], specifically Allegro Common Lisp [franz.com].

    Look forward to the Oddcast [oddcast.com] powered bots in the near future (now available via Pandorabots' site).

  • Imagine a beowulf cluster of these....

    ...posting to Slashdot!

    ...as Anonymous Coward!

