
Can Curiosity Be Programmed?

samzenpus posted more than 4 years ago | from the killing-the-computer-cat dept.


destinyland writes "AI researcher Jurgen Schmidhuber says his main scientific ambition 'is to build an optimal scientist, then retire.' The Cognitive Robotics professor has worked on problems including artificial ants and even robots that are taught how to tie shoelaces using reinforcement learning, but he believes algorithms can be written that allow the programming of curiosity itself. 'Curiosity is the desire to create or discover more non-random, non-arbitrary, regular data that is novel and surprising...' He's already created art using algorithmic information theory, and can describe the simple algorithmic principle that underlies subjective beauty, creativity, and curiosity itself. And he ultimately addresses the possibility that the entire Universe, including everyone in it, is in principle computable by a completely deterministic computer program."
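
For the curious, here is a toy sketch of the compression-progress idea behind that claim: the intrinsic reward for an observation is the number of bits a learning predictor saves on it after updating. The character-frequency model, Laplace smoothing, and 256-symbol alphabet below are stand-ins chosen for illustration, not Schmidhuber's actual formulation.

import math
from collections import Counter

class CompressionProgressAgent:
    # Toy "compression progress" reward: curiosity ~ bits saved by learning.
    # The predictive model is just a character-frequency table, a stand-in for
    # whatever adaptive compressor a real agent would use.

    def __init__(self):
        self.counts = Counter()   # accumulated knowledge about observed symbols
        self.total = 0

    def _code_length_bits(self, data):
        # Bits needed to encode `data` under the current model
        # (Laplace-smoothed, assuming a 256-symbol alphabet).
        bits = 0.0
        for ch in data:
            p = (self.counts[ch] + 1) / (self.total + 256)
            bits += -math.log2(p)
        return bits

    def _learn(self, data):
        self.counts.update(data)
        self.total += len(data)

    def intrinsic_reward(self, observation):
        before = self._code_length_bits(observation)  # cost under the old model
        self._learn(observation)
        after = self._code_length_bits(observation)   # cost after learning it
        return before - after                         # bits saved = "interestingness"

agent = CompressionProgressAgent()
print(agent.intrinsic_reward("aaaaaaaaaa"))  # regular and new: large progress
print(agent.intrinsic_reward("aaaaaaaaaa"))  # already well compressed: little progress left
print(agent.intrinsic_reward("bbbbbbbbbb"))  # novel but equally regular: large progress again

In this toy version, random noise scores low because it never compresses much better after learning, which is the sense in which "non-random, non-arbitrary, regular data that is novel" is the interesting kind.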


skynet (0)

Anonymous Coward | more than 4 years ago | (#30929472)

'nuff said

Re:skynet (0)

Anonymous Coward | more than 4 years ago | (#30929478)

Wasn't that fiction?

Re:skynet (0)

Anonymous Coward | more than 4 years ago | (#30929906)

According to Aliens vs Predator vs The Terminator, [wikipedia.org] that's just what the government wants you to believe.

No, but it can be beaten out (2, Funny)

Anonymous Coward | more than 4 years ago | (#30929474)

Oh wait, you're not talking about children... nevermind.

curiosity 0.1 (5, Funny)

Anonymous Coward | more than 4 years ago | (#30929498)


#!/bin/sh
for i in who what where when why how; do
    echo "But $i, dad?"
done

I hereby submit this project to the /. community under the GPL v2.

To quote Portal (1, Funny)

Anonymous Coward | more than 4 years ago | (#30929698)

Curiosity Sphere: Who are you? What is that? Oh! What's that? What's THAT? What is THAT?

Curiosity Sphere: Ooooh! That thing has numbers on it.

Curiosity Sphere: Hey! Look at that thing. No, that other thing!

Curiosity Sphere: Where are we going? Are you coming back? What's that noise? Is that a gun? Do you smell something burning? Ooooohh... what's in heeeere?

Curiosity Sphere: Oh hey! You're the lady from the test. Hi!

Re:curiosity 0.1 (2, Funny)

martin-boundary (547041) | more than 4 years ago | (#30929742)

I hereby submit this project to the /. community under the GPL v2.

Why not the GPLv3 ?

Re:curiosity 0.1 (1)

starbugs (1670420) | more than 4 years ago | (#30930090)

Why not the GPLv3 ?

Cause then skynet could not use it.
(They'd have a problem with section 11 paragraph 6)

Re:curiosity 0.1 (1)

Phoe6 (705194) | more than 4 years ago | (#30930826)

The previous guy must have been a kernel hacker; that's why.

Re:curiosity 0.1 (1)

cs02rm0 (654673) | more than 4 years ago | (#30930858)

TDD.

Can curiosity be tested?

In the absence of a (Turing-style?) test, how can we say this script isn't a satisfactory implementation?

but can it (0)

Anonymous Coward | more than 4 years ago | (#30929502)

be ported to sega cd?

There. (1)

w0mprat (1317953) | more than 4 years ago | (#30929542)

001 Gather data
002 Hypothesise
003 Go To 1

Physics of computing the universe (4, Interesting)

BitZtream (692029) | more than 4 years ago | (#30929550)

And he ultimately addresses the possibility that the entire Universe, including everyone in it, is in principle computable by a completely deterministic computer program.

The problem with this is that you need to be outside the universe in order to do so; you can't calculate the universe from within itself, any more than VMware can run a virtual machine faster than the host processor.

You'd also need more mass in your computer than exists in the universe, observable or otherwise.

So sure, I'll go with the theory that it's possible, just not by anything in our universe.

Likewise, nothing in our universe could leave it to perform the calculation elsewhere, as doing so links the two realities together, so you now need to simulate both.

Everything is interconnected and the very act of attempting to simulate the universe changes the simulation. Every new version of the simulation would instantly require a new version to take into account the changes from the previous version.

The theory is ... cute at best, but unworkable.

Re:Physics of computing the universe (3, Interesting)

biryokumaru (822262) | more than 4 years ago | (#30929584)

VMware should, in theory, be able to simulate a system faster than the host processor, as long as it doesn't have to actually run it at that speed in real time.

We should, in theory, be able to simulate the universe, just not as fast as the universe actually moves.

Besides, I bet we can just gloss over a lot of the boring bits and stay within a margin of error while ultimately simulating faster than the universe is actually transpiring. That doesn't seem unreasonable.

Re:Physics of computing the universe (2, Informative)

BitZtream (692029) | more than 4 years ago | (#30929840)

Yes, I agree, I should have specified.

We will not be able to simulate in real time or faster.

However, glossing over bits means you are also wrong, however slightly.

Re:Physics of computing the universe (0)

Anonymous Coward | more than 4 years ago | (#30930070)

Chaos theory fail. Please hand in your geek pass.

Re:Physics of computing the universe (0)

Anonymous Coward | more than 4 years ago | (#30930072)

You mean kinda like skipping frames? In a sense we already do that now, with basic equations and by neglecting very small forces, but this is talking about being able to account for everything, which I agree cannot be done, because, like BitZtream said, you have to account for the simulator, and so on. Even if we did, as you suggest, account for everything and just run it slower, it will fall behind the current time, and won't be able to make predictions, which is the purpose, but simply is recalculating what happened in the past instead.

Re:Physics of computing the universe (1)

mooingyak (720677) | more than 4 years ago | (#30930592)

simply is recalculating what happened in the past instead.

which could still be enormously useful.

Re:Physics of computing the universe (4, Interesting)

p00ya (579445) | more than 4 years ago | (#30930532)

Even at a slower rate than real time you cannot simulate the universe from within the universe. This has been proved by Cantor diagonalization (see Wolpert's "Physical limits of inference" paper).

Re:Physics of computing the universe (0)

Anonymous Coward | more than 4 years ago | (#30930560)

You wouldn't happen to be a climate change theorist would you?

Re:Physics of computing the universe (1)

insufflate10mg (1711356) | more than 4 years ago | (#30929608)

The problem with this is that you need to be outside the universe in order to do so, you can't calculate the universe from within itself any more than a VMWare can run a machine faster than the host processor.

You'd also need more mass in your computer than exists in the universe, observable or otherwise.

I can't believe this for some reason. If it's my own ignorance... would someone elaborate? Why is a device's computing/processing power only able to simulate, at most, the particles it is made out of?

Re:Physics of computing the universe (3, Informative)

Urza9814 (883915) | more than 4 years ago | (#30929694)

Basically - there's no way to store more information in a given area than what it already contains. In order to fully simulate the universe, at full (or greater) speed, you would have to know absolutely everything about absolutely every particle and subatomic particle, etc. And that includes the particles that make up the processor itself.

It's like this: Say you have a 300 DPI printer. You print out a full page of text. Now, you want to fit all the information about that page into some sub-region of the page, printed on the same printer. OK, so you say you can just shrink the text or encode it in binary or something, which is fine - except you must somehow also fit the information about the shrunken/encoded text in there as well. As you can see, you enter a recursive nightmare. And since your printer has a fixed resolution, you would quickly reach a point where any attempt to fit more information results in a blurred, pixelated mess.

Re:Physics of computing the universe (5, Interesting)

Profane MuthaFucka (574406) | more than 4 years ago | (#30929804)

There's a workaround. You don't need to simulate the entire universe at one time, and there's no way that anything inside the universe would ever be able to tell that huge swaths of the universe aren't being actively simulated.

Reasoning: If a universe simulator needs to have more states than exist inside the universe (we're both assuming this), then any process which verifies the universe simulator would also need to have more states than exist inside the universe. Therefore, the universe can only be fully simulated from outside the universe, and you could only determine that the universe was fully simulated from outside the universe. From inside the universe, all you could simulate and all you could check would be a subset of the universe you are in.

So you could actually pull off a neat trick, just like the human eye does. The human eye doesn't actually see clearly except in the very middle of the vision. But, wherever you look, whatever you're looking at is clearly resolved. Your brain gets the distinct impression that it's looking at the entire scene clearly, except it's not. Only the part that it's actually looking at is clear.

Well, that's enough hacking of the universe for today. I need a beer.

Re:Physics of computing the universe (2, Interesting)

Have Brain Will Rent (1031664) | more than 4 years ago | (#30930374)

That's probably why God invented QM - as a space/computation saving device.

Re:Physics of computing the universe (1)

paeanblack (191171) | more than 4 years ago | (#30929848)

What about a PDF that describes itself?

http://dblaz.beevomit.org/quine.pdf

Solves your "recursion" problem quite neatly.

Re:Physics of computing the universe (1)

Have Brain Will Rent (1031664) | more than 4 years ago | (#30930400)

Your argument seems to be saying there is no such thing as lossless compression.

Gödel's theorems pretty much explain why the universe cannot be simulated by anything inside the universe.

Re:Physics of computing the universe (2, Interesting)

JWSmythe (446288) | more than 4 years ago | (#30929618)

    If/when a true AI exists, it will need some randomization to make it curious. Sure, you can chart point A to B to C, but what if it randomly skews off to somewhere just west of point Z en route, and observes?

    That doesn't have to be a physical route. It could be as simple as taking a random word from a dictionary, searching for it on your favorite search engine, taking a random result from there, and then following the result from another random word. An unpredictable path, but that's what brings any of us to enlightenment. If you just went from home to work and back every day, and never turned down the wrong road just to see where it goes, you'd never discover what is really out there. What is your universe? I've known so many people who only know points A and B, and never even considered point C, much less all the wonderful things to experience in between or beyond.

Re:Physics of computing the universe (0)

Anonymous Coward | more than 4 years ago | (#30929750)

Randomness is just a frame of reference where you don't know all the significant inputs. Search hard enough and you'll figure out a reason for someone going to "C". There is a "Reason" for everything, and randomness is just a condition where you can't pinpoint that reason on a long enough timeline.

Re:Physics of computing the universe (0)

Anonymous Coward | more than 4 years ago | (#30930126)

Current physical theory says otherwise.

Re:Physics of computing the universe (1)

wisty (1335733) | more than 4 years ago | (#30930416)

It won't just need to be a bit random. It will need to be capable of spotting (and following up on) false leads. Making mistakes. Look at John Nash. Look at Kepler (yes, that Kepler), who spent most of his time trying to make the planets' orbits "fit inside" his crazy "Harmonices Mundi" theories: a bit of geometry (Kepler solids) that he tried to extend to "harmonic analysis to music, meteorology and astrology; harmony resulted from the tones made by the souls of heavenly bodies—and in the case of astrology, the interaction between those tones and human souls" (http://en.wikipedia.org/wiki/Harmonices_Mundi).

Re:Physics of computing the universe (1)

some_guy_88 (1306769) | more than 4 years ago | (#30929628)

Sure, but put simply, he's just saying that there is no such thing as randomness, and that with absolute knowledge of all the variables one could predict with certainty the exact state of any object at any point in the future, in a similar way to how you could crudely work out how long it's going to take a ball to hit the ground when dropped from a certain height with basic high-school Newtonian physics.

That relies on there being no randomness in the universe of course...
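
For reference, the crude high-school calculation mentioned above is one line: ignoring air resistance, a ball dropped from height h takes t = sqrt(2h/g) to land. A throwaway Python check (the 20 m figure is just an example):

import math

def fall_time(height_m, g=9.81):
    # Time for a dropped ball to reach the ground, ignoring air resistance: h = g*t^2/2
    return math.sqrt(2 * height_m / g)

print(round(fall_time(20.0), 2))  # a 20 m drop takes about 2.02 s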

Re:Physics of computing the universe (1)

Thiez (1281866) | more than 4 years ago | (#30929658)

A deterministic universe is interesting from the 'do we have free will?'-perspective, but the whole uncertainty principle ruins our attempts to simulate the universe even if we could build one 'outside' reality. Frustrating really, we are surrounded by very uncooperative hardware.

Re:Physics of computing the universe (1)

BitZtream (692029) | more than 4 years ago | (#30930326)

There is no free will, uncertainty or chaos.

There is only our inability to understand/simulate it at the level required to remove the little bits of error that we refer to as 'randomness' or entropy.

Re:Physics of computing the universe (1)

Urza9814 (883915) | more than 4 years ago | (#30929662)

It seems to me that you are assuming that this outside universe obeys some basic laws of our own. Why make such an assumption? I mean, if we're going to go so far as to hypothesize a computer built outside of our own universe, why couldn't this outside place obey radically different rules than our own universe? Suppose that outside our universe, there is no such thing as time. Calculations are put into the computer and the result returns instantly. While it is true that theoretically you could still never completely match the universe, as your results would change it and then you would need to recalculate, each run of the calculations would surely get you closer to a true solution. And, if it turns out that our universe is just slightly "grainy" at extremely small distances - if the Planck length or something along those lines ends up quantizing distance in our universe - you would eventually reach the correct answer. Just as the pattern of 1/2 + 1/4 + 1/8... will eventually reach 1 if you assume you are rounding the final answer to the nearest thousandth.
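
As a quick sanity check of that last point, a small loop (purely illustrative) shows the partial sums 1/2 + 1/4 + 1/8 + ... land within a thousandth of 1 after only eleven terms:

total, term, terms = 0.0, 0.5, 0
while round(total, 3) != 1.0:      # "rounding the final answer to the nearest thousandth"
    total += term
    term /= 2
    terms += 1
print(terms, total)                # 11 terms: 0.99951..., which rounds to 1.000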

Re:Physics of computing the universe (1)

Potor (658520) | more than 4 years ago | (#30929756)

Suppose that outside our universe, there is no such thing as time. Calculations are put into the computer and the result returns instantly.

Instantly - as an instant - is still time. And relative to the input. Only if the results came before the input would this universe be different, and then again, "before", like "instantly", is still temporal.

Re:Physics of computing the universe (0)

Anonymous Coward | more than 4 years ago | (#30930636)

No. He means that both the input and the output, and the intermediates of processing all exist statically without time.

Imagine for a moment, the swinging of a pendulum.

Now, imagine it as an infinite number of sections (through time), all existing simultaneously.

If you were to "flip" through these sections, you would see the pendulum swing. The pendulum, however, does not swing in the timeless higher dimension. It exists in both "tick" and "tock" states simultaneously, and in all possible states in between.

From this timeless dimension, all possible configurations of our quantum universe would exist simultaneously. (that is, all decisions for all possible events would be statically represented all at once.) It would do this in a perfect, unchanging, and unchangeable manner.

The entire universe, and all of its (infinite?) possible configurations (over all of time), would be there, crystallized as a hyperspace solid.

Perception as we understand it would not be possible at this level.

Re:Physics of computing the universe (1)

BitZtream (692029) | more than 4 years ago | (#30929852)

It seems to me that you are assuming that this outside universe obeys some basic laws of our own. Why make such an assumption?

Because that's the way science works. It's based on observations.

Of course, your imagination is part of the universe and may just be better connected to whatever may be outside our universe than I am, so you could of course be entirely correct.

But ... if we don't go based on observations, then it's not science; it's more like fantasy or religion, take your pick.

The final part of it is simple: the universe doesn't round; the pattern only reaches 1 because you introduce error intentionally to make things easier on yourself (or your calculations).

Re:Physics of computing the universe (5, Funny)

ScytheLegion (1274902) | more than 4 years ago | (#30929668)

I'll be programming all of this tomorrow... on my new iPad

Self referential (1)

Max Littlemore (1001285) | more than 4 years ago | (#30930010)

If you have to explain why a joke is funny, it isn't funny.

From TFA:

How does the compression progress drive explain humor? Some subjective observers who read a given joke for the first time may think it is funny. Why? As the eyes are sequentially scanning the text the brain receives a complex visual input stream. The latter is subjectively partially compressible as it relates to the observer's previous knowledge about letters and words. That is, given the reader's current knowledge and current compressor, the raw data can be encoded by fewer bits than required to store random data of the same size. The punch line at the end, however, is unexpected. Initially this failed expectation results in sub-optimal data compression -- storage of expected events does not cost anything, but deviations from predictions require extra bits to encode them. The compressor, however, does not stay the same forever. Within a short time interval, its learning algorithm improves its performance on the data seen so far, by discovering the non-random, non-arbitrary and therefore compressible pattern relating the punch line to previous text and previous knowledge of the reader. This saves a few bits of storage. The number of saved bits (or a similar measure of learning progress) becomes the observer's intrinsic reward, possibly strong enough to motivate him to read on in search for more reward through additional yet unknown patterns. The recent joke, however, will never be novel or funny again.

I fear that no joke will ever be novel or funny again.

Re:Physics of computing the universe (0)

Anonymous Coward | more than 4 years ago | (#30930216)

10 PRINT "What does this button do?"
20 GOTO 10

Re:Physics of computing the universe (1)

precariousgray (1663153) | more than 4 years ago | (#30930298)

Well, it looks like we're going to have to get at the garbage file. The way I figure it, after we have it on hand, we'll be able to read all the intra-office gossip going around the building containing the server our universe is running on, then we can figure out who the real PEBCAKs are. Yeah, you see, after that we're going to hack this Gibson to commandeer another network-attached universe-computing machine, and use that to start up our own botnet in the real world. From there, the possibilities are limitless!

Our Beowulf cluster universe computing overlords will never know what hit 'em!

Re:Physics of computing the universe (0)

Anonymous Coward | more than 4 years ago | (#30930304)

Compression exists just for reasons like this.

Re:Physics of computing the universe (0)

Anonymous Coward | more than 4 years ago | (#30930508)

Did you sleep under a rock for the last 80 years? Two words: Quantum theory. Sorry.

Output=42 (2, Funny)

xactuary (746078) | more than 4 years ago | (#30929556)

Somehow it always comes down to being 42.

Re:Output=42 (0, Redundant)

mark-t (151149) | more than 4 years ago | (#30929700)

You know that was fiction, right?

Re:Output=42 (1)

cptnapalm (120276) | more than 4 years ago | (#30929752)

Which makes it highly improbable. Since the highly improbable people have a spaceship with an Infinite Improbability Drive, it is highly likely that the highly improbable fact is, in fact, true.

Everyone Can Be Programmed - Mind Has No Firewall (0)

Anonymous Coward | more than 4 years ago | (#30929558)

"The Mind Has No Firewall"
Army article on psychotronic weapons

>>> The following article is from the US military publication Parameters, subtitled "US Army War College Quarterly." It describes itself as "The United States Army's Senior Professional Journal." [Click here to read a crucial excerpt.]

"The Mind Has No Firewall" by Timothy L. Thomas. Parameters, Spring 1998, pp. 84-92.

The human body, much like a computer, contains myriad data processors. They include, but are not limited to, the chemical-electrical activity of the brain, heart, and peripheral nervous system, the signals sent from the cortex region of the brain to other parts of our body, the tiny hair cells in the inner ear that process auditory signals, and the light-sensitive retina and cornea of the eye that process visual activity.[2] We are on the threshold of an era in which these data processors of the human body may be manipulated or debilitated. Examples of unplanned attacks on the body's data-processing capability are well-documented. Strobe lights have been known to cause epileptic seizures. Not long ago in Japan, children watching television cartoons were subjected to pulsating lights that caused seizures in some and made others very sick.

Defending friendly and targeting adversary data-processing capabilities of the body appears to be an area of weakness in the US approach to information warfare theory, a theory oriented heavily toward systems data-processing and designed to attain information dominance on the battlefield. Or so it would appear from information in the open, unclassified press. This US shortcoming may be a serious one, since the capabilities to alter the data- processing systems of the body already exist. A recent edition of U.S. News and World Report highlighted several of these "wonder weapons" (acoustics, microwaves, lasers) and noted that scientists are "searching the electromagnetic and sonic spectrums for wavelengths that can affect human behavior."[3] A recent Russian military article offered a slightly different slant to the problem, declaring that "humanity stands on the brink of a psychotronic war" with the mind and body as the focus. That article discussed Russian and international attempts to control the psycho-physical condition of man and his decisionmaking processes by the use of VHF-generators, "noiseless cassettes," and other technologies.

An entirely new arsenal of weapons, based on devices designed to introduce subliminal messages or to alter the body's psychological and data-processing capabilities, might be used to incapacitate individuals. These weapons aim to control or alter the psyche, or to attack the various sensory and data-processing systems of the human organism. In both cases, the goal is to confuse or destroy the signals that normally keep the body in equilibrium.

This article examines energy-based weapons, psychotronic weapons, and other developments designed to alter the ability of the human body to process stimuli. One consequence of this assessment is that the way we commonly use the term "information warfare" falls short when the individual soldier, not his equipment, becomes the target of attack.

Information Warfare Theory and the Data-Processing Element of Humans

In the United States the common conception of information warfare focuses primarily on the capabilities of hardware systems such as computers, satellites, and military equipment which process data in its various forms. According to Department of Defense Directive S-3600.1 of 9 December 1996, information warfare is defined as "an information operation conducted during time of crisis or conflict to achieve or promote specific objectives over a specific adversary or adversaries." An information operation is defined in the same directive as "actions taken to affect adversary information and information systems while defending one's own information and information systems." These "information systems" lie at the heart of the modernization effort of the US armed forces and other countries, and manifest themselves as hardware, software, communications capabilities, and highly trained individuals. Recently, the US Army conducted a mock battle that tested these systems under simulated combat conditions.

US Army Field Manual 101-5-1, Operational Terms and Graphics (released 30 September 1997), defines information warfare as "actions taken to achieve information superiority by affecting a hostile's information, information based-processes, and information systems, while defending one's own information, information processes, and information systems." The same manual defines information operations as a "continuous military operation within the military information environment that enables, enhances, and protects friendly forces' ability to collect, process, and act on information to achieve an advantage across the full range of military operations. [Information operations include] interacting with the Global Information Environment . . . and exploiting or denying an adversary's information and decision capabilities."[4]

This "systems" approach to the study of information warfare emphasizes the use of data, referred to as information, to penetrate an adversary's physical defenses that protect data (information) in order to obtain operational or strategic advantage. It has tended to ignore the role of the human body as an information- or data-processor in this quest for dominance except in those cases where an individual's logic or rational thought may be upset via disinformation or deception. As a consequence little attention is directed toward protecting the mind and body with a firewall as we have done with hardware systems. Nor have any techniques for doing so been prescribed. Yet the body is capable not only of being deceived, manipulated, or misinformed but also shut down or destroyed--just as any other data-processing system. The "data" the body receives from external sources--such as electromagnetic, vortex, or acoustic energy waves--or creates through its own electrical or chemical stimuli can be manipulated or changed just as the data (information) in any hardware system can be altered.

The only body-related information warfare element considered by the United States is psychological operations (PSYOP). In Joint Publication 3-13.1, for example, PSYOP is listed as one of the elements of command and control warfare. The publication notes that "the ultimate target of [information warfare] is the information dependent process, whether human or automated . . . . Command and control warfare (C2W) is an application of information warfare in military operations. . . . C2W is the integrated use of PSYOP, military deception, operations security, electronic warfare and physical destruction."[5]

One source defines information as a "nonaccidental signal used as an input to a computer or communications system."[6] The human body is a complex communication system constantly receiving nonaccidental and accidental signal inputs, both external and internal. If the ultimate target of information warfare is the information-dependent process, "whether human or automated," then the definition in the joint publication implies that human data-processing of internal and external signals can clearly be considered an aspect of information warfare. Foreign researchers have noted the link between humans as data processors and the conduct of information warfare. While some study only the PSYOP link, others go beyond it. As an example of the former, one recent Russian article described offensive information warfare as designed to "use the Internet channels for the purpose of organizing PSYOP as well as for `early political warning' of threats to American interests."[7] The author's assertion was based on the fact that "all mass media are used for PSYOP . . . [and] today this must include the Internet." The author asserted that the Pentagon wanted to use the Internet to "reinforce psychological influences" during special operations conducted outside of US borders to enlist sympathizers, who would accomplish many of the tasks previously entrusted to special units of the US armed forces.

Others, however, look beyond simple PSYOP ties to consider other aspects of the body's data-processing capability. One of the principal open source researchers on the relationship of information warfare to the body's data-processing capability is Russian Dr. Victor Solntsev of the Baumann Technical Institute in Moscow. Solntsev is a young, well-intentioned researcher striving to point out to the world the potential dangers of the computer operator interface. Supported by a network of institutes and academies, Solntsev has produced some interesting concepts.[8] He insists that man must be viewed as an open system instead of simply as an organism or closed system. As an open system, man communicates with his environment through information flows and communications media. One's physical environment, whether through electromagnetic, gravitational, acoustic, or other effects, can cause a change in the psycho-physiological condition of an organism, in Solntsev's opinion. Change of this sort could directly affect the mental state and consciousness of a computer operator. This would not be electronic war or information warfare in the traditional sense, but rather in a nontraditional and non-US sense. It might encompass, for example, a computer modified to become a weapon by using its energy output to emit acoustics that debilitate the operator. It also might encompass, as indicated below, futuristic weapons aimed against man's "open system."

Solntsev also examined the problem of "information noise," which creates a dense shield between a person and external reality. This noise may manifest itself in the form of signals, messages, images, or other items of information. The main target of this noise would be the consciousness of a person or a group of people. Behavior modification could be one objective of information noise; another could be to upset an individual's mental capacity to such an extent as to prevent reaction to any stimulus. Solntsev concludes that all levels of a person's psyche (subconscious, conscious, and "superconscious") are potential targets for destabilization.

According to Solntsev, one computer virus capable of affecting a person's psyche is Russian Virus 666. It manifests itself in every 25th frame of a visual display, where it produces a combination of colors that allegedly put computer operators into a trance. The subconscious perception of the new pattern eventually results in arrhythmia of the heart. Other Russian computer specialists, not just Solntsev, talk openly about this "25th frame effect" and its ability to subtly manage a computer user's perceptions. The purpose of this technique is to inject a thought into the viewer's subconscious. It may remind some of the subliminal advertising controversy in the United States in the late 1950s.

US Views on "Wonder Weapons": Altering the Data-Processing Ability of the Body

What technologies have been examined by the United States that possess the potential to disrupt the data-processing capabilities of the human organism? The 7 July 1997 issue of U.S. News and World Report described several of them designed, among other things, to vibrate the insides of humans, stun or nauseate them, put them to sleep, heat them up, or knock them down with a shock wave.[9] The technologies include dazzling lasers that can force the pupils to close; acoustic or sonic frequencies that cause the hair cells in the inner ear to vibrate and cause motion sickness, vertigo, and nausea, or frequencies that resonate the internal organs causing pain and spasms; and shock waves with the potential to knock down humans or airplanes and which can be mixed with pepper spray or chemicals.[10]

With modification, these technological applications can have many uses. Acoustic weapons, for example, could be adapted for use as acoustic rifles or as acoustic fields that, once established, might protect facilities, assist in hostage rescues, control riots, or clear paths for convoys. These waves, which can penetrate buildings, offer a host of opportunities for military and law enforcement officials. Microwave weapons, by stimulating the peripheral nervous system, can heat up the body, induce epileptic-like seizures, or cause cardiac arrest. Low-frequency radiation affects the electrical activity of the brain and can cause flu-like symptoms and nausea. Other projects sought to induce or prevent sleep, or to affect the signal from the motor cortex portion of the brain, overriding voluntary muscle movements. The latter are referred to as pulse wave weapons, and the Russian government has reportedly bought over 100,000 copies of the "Black Widow" version of them.[11]

However, this view of "wonder weapons" was contested by someone who should understand them. Brigadier General Larry Dodgen, Deputy Assistant to the Secretary of Defense for Policy and Missions, wrote a letter to the editor about the "numerous inaccuracies" in the U.S. News and World Report article that "misrepresent the Department of Defense's views."[12] Dodgen's primary complaint seemed to have been that the magazine misrepresented the use of these technologies and their value to the armed forces. He also underscored the US intent to work within the scope of any international treaty concerning their application, as well as plans to abandon (or at least redesign) any weapon for which countermeasures are known. One is left with the feeling, however, that research in this area is intense. A concern not mentioned by Dodgen is that other countries or non-state actors may not be bound by the same constraints. It is hard to imagine someone with a greater desire than terrorists to get their hands on these technologies. "Psycho-terrorism" could be the next buzzword.

Russian Views on "Psychotronic War"

The term "psycho-terrorism" was coined by Russian writer N. Anisimov of the Moscow Anti-Psychotronic Center. According to Anisimov, psychotronic weapons are those that act to "take away a part of the information which is stored in a man's brain. It is sent to a computer, which reworks it to the level needed for those who need to control the man, and the modified information is then reinserted into the brain." These weapons are used against the mind to induce hallucinations, sickness, mutations in human cells, "zombification," or even death. Included in the arsenal are VHF generators, X-rays, ultrasound, and radio waves. Russian army Major I. Chernishev, writing in the military journal Orienteer in February 1997, asserted that "psy" weapons are under development all over the globe. Specific types of weapons noted by Chernishev (not all of which have prototypes) were:

A psychotronic generator, which produces a powerful electromagnetic emanation capable of being sent through telephone lines, TV, radio networks, supply pipes, and incandescent lamps.

An autonomous generator, a device that operates in the 10-150 Hertz band, which at the 10-20 Hertz band forms an infrasonic oscillation that is destructive to all living creatures.

A nervous system generator, designed to paralyze the central nervous systems of insects, which could have the same applicability to humans.

Ultrasound emanations, which one institute claims to have developed. Devices using ultrasound emanations are supposedly capable of carrying out bloodless internal operations without leaving a mark on the skin. They can also, according to Chernishev, be used to kill.

Noiseless cassettes. Chernishev claims that the Japanese have developed the ability to place infra-low frequency voice patterns over music, patterns that are detected by the subconscious. Russians claim to be using similar "bombardments" with computer programming to treat alcoholism or smoking.

The 25th-frame effect, alluded to above, a technique wherein each 25th frame of a movie reel or film footage contains a message that is picked up by the subconscious. This technique, if it works, could possibly be used to curb smoking and alcoholism, but it has wider, more sinister applications if used on a TV audience or a computer operator.

Psychotropics, defined as medical preparations used to induce a trance, euphoria, or depression. Referred to as "slow-acting mines," they could be slipped into the food of a politician or into the water supply of an entire city. Symptoms include headaches, noises, voices or commands in the brain, dizziness, pain in the abdominal cavities, cardiac arrhythmia, or even the destruction of the cardiovascular system.

There is confirmation from US researchers that this type of study is going on. Dr. Janet Morris, coauthor of The Warrior's Edge, reportedly went to the Moscow Institute of Psychocorrelations in 1991. There she was shown a technique pioneered by the Russian Department of Psycho-Correction at Moscow Medical Academy in which researchers electronically analyze the human mind in order to influence it. They input subliminal command messages, using key words transmitted in "white noise" or music. Using an infra-sound, very low frequency transmission, the acoustic psycho-correction message is transmitted via bone conduction.[13]

In summary, Chernishev noted that some of the militarily significant aspects of the "psy" weaponry deserve closer research, including the following nontraditional methods for disrupting the psyche of an individual:

ESP research: determining the properties and condition of objects without ever making contact with them and "reading" peoples' thoughts

Clairvoyance research: observing objects that are located just beyond the world of the visible--used for intelligence purposes

Telepathy research: transmitting thoughts over a distance--used for covert operations

Telekinesis research: actions involving the manipulation of physical objects using thought power, causing them to move or break apart--used against command and control systems, or to disrupt the functioning of weapons of mass destruction

Psychokinesis research: interfering with the thoughts of individuals, on either the strategic or tactical level

While many US scientists undoubtedly question this research, it receives strong support in Moscow. The point to underscore is that individuals in Russia (and other countries as well) believe these means can be used to attack or steal from the data-processing unit of the human body.

Solntsev's research, mentioned above, differs slightly from that of Chernishev. For example, Solntsev is more interested in hardware capabilities, specifically the study of the information-energy source associated with the computer-operator interface. He stresses that if these energy sources can be captured and integrated into the modern computer, the result will be a network worth more than "a simple sum of its components." Other researchers are studying high-frequency generators (those designed to stun the psyche with high frequency waves such as electromagnetic, acoustic, and gravitational); the manipulation or reconstruction of someone's thinking through planned measures such as reflexive control processes; the use of psychotronics, parapsychology, bioenergy, bio fields, and psychoenergy;[14] and unspecified "special operations" or anti-ESP training.

The last item is of particular interest. According to a Russian TV broadcast, the strategic rocket forces have begun anti-ESP training to ensure that no outside force can take over command and control functions of the force. That is, they are trying to construct a firewall around the heads of the operators.

Conclusions

At the end of July 1997, planners for Joint Warrior Interoperability Demonstration '97 "focused on technologies that enhance real-time collaborative planning in a multinational task force of the type used in Bosnia and in Operation Desert Storm. The JWID '97 network, called the Coalition Wide-Area Network (CWAN), is the first military network that allows allied nations to participate as full and equal partners."[15] The demonstration in effect was a trade fair for private companies to demonstrate their goods; defense ministries got to decide where and how to spend their money wiser, in many cases without incurring the cost of prototypes. It is a good example of doing business better with less. Technologies demonstrated included:[16]

Soldiers using laptop computers to drag cross-hairs over maps to call in airstrikes

Soldiers carrying beepers and mobile phones rather than guns

Generals tracking movements of every unit, counting the precise number of shells fired around the globe, and inspecting real-time damage inflicted on an enemy, all with multicolored graphics[17]

Every account of this exercise emphasized the ability of systems to process data and provide information feedback via the power invested in their microprocessors. The ability to affect or defend the data-processing capability of the human operators of these systems was never mentioned during the exercise; it has received only slight attention during countless exercises over the past several years. The time has come to ask why we appear to be ignoring the operators of our systems. Clearly the information operator, exposed before a vast array of potentially immobilizing weapons, is the weak spot in any nation's military assets. There are few international agreements protecting the individual soldier, and these rely on the good will of the combatants. Some nations, and terrorists of every stripe, don't care about such agreements.

This article has used the term data-processing to demonstrate its importance to ascertaining what so-called information warfare and information operations are all about. Data-processing is the action this nation and others need to protect. Information is nothing more than the output of this activity. As a result, the emphasis on information-related warfare terminology ("information dominance," "information carousel") that has proliferated for a decade does not seem to fit the situation before us. In some cases the battle to affect or protect data-processing elements pits one mechanical system against another. In other cases, mechanical systems may be confronted by the human organism, or vice versa, since humans can usually shut down any mechanical system with the flip of a switch. In reality, the game is about protecting or affecting signals, waves, and impulses that can influence the data-processing elements of systems, computers, or people. We are potentially the biggest victims of information warfare, because we have neglected to protect ourselves.

Our obsession with a "system of systems," "information dominance," and other such terminology is most likely a leading cause of our neglect of the human factor in our theories of information warfare. It is time to change our terminology and our conceptual paradigm. Our terminology is confusing us and sending us in directions that deal primarily with the hardware, software, and communications components of the data-processing spectrum. We need to spend more time researching how to protect the humans in our data management structures. Nothing in those structures can be sustained if our operators have been debilitated by potential adversaries or terrorists who--right now--may be designing the means to disrupt the human component of our carefully constructed notion of a system of systems.

NOTES

1. I. Chernishev, "Can Rulers Make `Zombies' and Control the World?" Orienteer, February 1997, pp. 58-62.

2. Douglas Pasternak, "Wonder Weapons," U.S. News and World Report, 7 July 1997, pp. 38-46.

3. Ibid., p. 38.

4. FM 101-5-1, Operational Terms and Graphics, 30 September 1997, p. 1-82.

5. Joint Pub 3-13.1, Joint Doctrine for Command and Control Warfare (C2W), 7 February 1996, p. v.

6. The American Heritage Dictionary (2d College Ed.; Boston: Houghton Mifflin, 1982), p. 660, definition 4.

7. Denis Snezhnyy, "Cybernetic Battlefield & National Security," Nezavisimoye Voyennoye Obozreniye, No. 10, 15-21 March 1997, p. 2.

8. Victor I. Solntsev, "Information War and Some Aspects of a Computer Operator's Defense," talk given at an Infowar Conference in Washington, D.C., September 1996, sponsored by the National Computer Security Association. Information in this section is based on notes from Dr. Solntsev's talk.

9. Pasternak, p. 40.

10. Ibid., pp. 40-46.

11. Ibid.

12. Larry Dodgen, "Nonlethal Weapons," U.S. News and World Report, 4 August 1997, p. 5.

13. "Background on the Aviary," Nexus Magazine, downloaded from the Internet on 13 July 1997 from www.execpc.com/vjentpr/nexusavi.html, p.7.

14. Aleksandr Cherkasov, "The Front Where Shots Aren't Fired," Orienteer, May 1995, p. 45. This article was based on information in the foreign and Russian press, according to the author, making it impossible to pinpoint what his source was for this reference.

15. Bob Brewin, "DOD looks for IT `golden nuggets,'" Federal Computer Week, 28 July 1997, p. 31, as taken from the Earlybird Supplement, 4 August 1997, p. B 17.

16. Oliver August, "Zap! Hard day at the office for NATO's laptop warriors," The Times, 28 July 1997, as taken from the Earlybird Supplement, 4 August 1997, p. B 16.

17. Ibid.

Lieutenant Colonel Timothy L. Thomas (USA Ret.) is an analyst at the Foreign Military Studies Office, Fort Leavenworth, Kansas. Recently he has written extensively on the Russian view of information operations and on current Russian military-political issues. During his military career he served in the 82d Airborne Division and was the Department Head of Soviet Military-Political Affairs at the US Army's Russian Institute in Garmisch, Germany.

  [see the article on the Parameters portion of the Army Website.]

Can Curiosity Be Programmed? (1)

poind3xt3r (890661) | more than 4 years ago | (#30929564)

Yes... I mean, no... I mean, maybe. Let me try.

Yeah? (3, Funny)

oldhack (1037484) | more than 4 years ago | (#30929594)

Why you wanna know?

Coding For Patterns of Anomalies (5, Interesting)

neorush (1103917) | more than 4 years ago | (#30929600)

Aren't we really just talking about coding for patterns of anomalies? We know how to code for patterns, we know how to code for anomalies. Isn't it a matter of processing huge data sets and looking for patterns that have not been recorded before? Of course, you could argue that whether or not the pattern is relevant is the big problem, but curiosity is not necessarily about relevance.
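
A bare-bones sketch of that "patterns not recorded before" idea; the coarse bucketing featurizer and the rarity threshold are arbitrary placeholders, not a serious detector:

from collections import Counter

class NoveltyDetector:
    # Flag records whose "pattern" (here just a crude feature tuple) is rare or unseen so far.

    def __init__(self, rarity_threshold=2):
        self.seen = Counter()
        self.rarity_threshold = rarity_threshold

    def pattern_of(self, record):
        # Stand-in featurizer: bucket each numeric field into coarse ranges of width 10.
        return tuple(int(x // 10) for x in record)

    def is_novel(self, record):
        pattern = self.pattern_of(record)
        novel = self.seen[pattern] < self.rarity_threshold
        self.seen[pattern] += 1          # remember the pattern either way
        return novel

detector = NoveltyDetector()
stream = [(12, 31), (14, 33), (11, 30), (95, 2)]
for record in stream:
    if detector.is_novel(record):
        print("worth a closer look:", record)   # fires on early and unusual records

Whether a flagged pattern is actually relevant is, as the parent says, the hard part; this only answers "not recorded before."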

Don't be evil (1)

gmuslera (3436) | more than 4 years ago | (#30929602)

Every time you program curiosity, a lolcat dies. "What happens if" is a very dangerous thing to teach to amoral beings.

Re:Don't be evil (1)

Siberwulf (921893) | more than 4 years ago | (#30929978)

If you weren't curious, the lolcat would still be 50% alive, damnit!

I haven't played it yet, but (1)

Culture20 (968837) | more than 4 years ago | (#30929610)

I've watched the ending... Isn't his goal the driving force behind the game Portal?

AI researcher Jurgen Schmidhuber says his main scientific ambition 'is to build an optimal scientist, then retire.'

Bit early... (1)

Bangmaker (1420175) | more than 4 years ago | (#30929624)

Should we not create computers with at least near-human intelligence before we try to give them curiosity? It seems pretty useless to me to give a computer curiosity in the hope that humans might learn something when, in its current state, the computer could not decipher the information it is curious about. I guess we could still look to the future, but why waste this time on such things when we could be programming for the iPad?

Re:Bit early... (1)

aldld (1663705) | more than 4 years ago | (#30929682)

I guess we could still look to the future, but why waste this time on such things when we could be programming for the iPad?

Because they're developing curiosity for the iPad.

Yeah, there's even an app for that.

Re:Bit early... (1)

ChrisMP1 (1130781) | more than 4 years ago | (#30929760)

Um, perhaps we're doing it because of our own curiosity?

Re:Bit early... (1)

Quackers_McDuck (1367183) | more than 4 years ago | (#30929780)

I'd say that we won't achieve near human intelligence /unless/ we try to give the AI curiosity. Curiosity in this context means a desire to learn, or a desire to find new patterns in the world, which seems pretty much necessary for any near-human AI to have (indeed, I think it would be more challenging to build an AI that achieves near-human intelligence but does not exhibit curiosity).

Re:Bit early... (1)

Have Brain Will Rent (1031664) | more than 4 years ago | (#30930512)

Your first and second statements are refuted by the existence of the human members on my condo board. However, the same board does stand as proof of the possibility entertained by your parenthetical remark.

programming (2, Insightful)

wizardforce (1005805) | more than 4 years ago | (#30929674)

I think that the approach commonly taken to achieve some form of AI (curiosity as an example) through programming methods may be a flawed way of going about it. We should probably go about the problem in a way similar to how biological systems developed various aspects of intelligence. That is, build a system with some basic rules for its operation that tend to make curiosity, and intelligence in general, an emergent property rather than one that is strictly programmed into the system. Take an existing system with some degree of "creativity" inherent in it, model our technology to mimic that natural system at first, and over time tweak it to suit our purposes, as it is extremely difficult to build such systems from scratch.

Re:programming (0)

Anonymous Coward | more than 4 years ago | (#30929728)

That is in fact an accepted alternate route in AI programming. There are a number of projects designed to allow AI-type behavior to emerge naturally from a system, rather than be explicitly programmed in ahead of time.

AI has fully matured (1, Insightful)

Anonymous Coward | more than 4 years ago | (#30929676)

I'm glad to see serious researchers are at work figuring this stuff out, now that they've got a working definition of intelligence and have figured out how to make intelligent programs.

of course it can (4, Interesting)

walkoff (1562019) | more than 4 years ago | (#30929688)

When I was a fledgling programmer in the '80s, I worked on some financial AI programs for a bank with some very smart people with lots of letters after their names, and programming artificial curiosity was assigned to me. After some thought and a lot of dead ends, I managed to program a reasonable (for our needs) facsimile of curiosity by assigning weights to the various pathways the program was evaluating and making those weights tend towards 0 (curiosity satisfied) or 1 (curious) without ever reaching the final values. By having the program modify the weights and make decisions on which paths to follow based on those weights, the program acted as if it was curious and came up with several interesting results that were completely unexpected.
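
Purely as a guess at what that might have looked like, here is a modern-Python sketch of the weighting scheme described above; the decay and boost constants, the surprise measure, and the path names are all invented for illustration (the original 1980s system's details aren't available):

import random

# Each pathway carries a curiosity weight in (0, 1): near 1 = still curious, near 0 = satisfied.
weights = {path: 0.9 for path in ("path_a", "path_b", "path_c")}

def evaluate(path):
    # Stand-in for the real evaluation; returns how surprising the result was, in [0, 1].
    return random.random()

for _ in range(1000):
    # Prefer pathways the program is still "curious" about (with a little noise).
    path = max(weights, key=lambda p: weights[p] * random.random())
    surprise = evaluate(path)
    w = weights[path]
    if surprise < 0.1:
        w *= 0.9                           # nothing new: drift toward 0, never reaching it
    else:
        w += (1.0 - w) * 0.5 * surprise    # something new: drift toward 1, never reaching it
    weights[path] = w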

Re:of course it can (0)

Anonymous Coward | more than 4 years ago | (#30930124)

...the program acted as if it was curious and came up with several interesting results that were completely unexpected.

It discovered that there was no God when it discovered where babies come from?

Re:of course it can (0)

Anonymous Coward | more than 4 years ago | (#30930330)

And then boom, we get financial weapons of destruction that wreak havoc in the financial market :)

Psychohistory (1)

bbeans (1731522) | more than 4 years ago | (#30929704)

Asimov would be proud

Show me the runny (3, Insightful)

DriedClexler (814907) | more than 4 years ago | (#30929710)

Schmidhuber makes interesting claims, like the ones about his Gödel machine [idsia.ch], an algorithm that makes provably globally optimal self-modifications.

But he never seems to get around to actually writing the code, or even non-vague pseudocode to implement these algorithms to show how they actually work and that they actually work. I guess it's just an "implementation issue". Ah, the chorus of the pure theorist...

Re:Show me the runny (0)

Anonymous Coward | more than 4 years ago | (#30929966)

Ah, the chorus of the pure theorist...

That pretty much sums up the whole singularity movement right there.

Re:Show me the runny (2, Informative)

Internalist (928097) | more than 4 years ago | (#30930282)

No, he knows and has explicitly stated in a few places that it's uncomputable, in much the same way that Kolmogorov Complexity is uncomputable, but an interesting and potentially useful theoretical construct, nonetheless.

This vein of Schmidhuber's work is more or less descended from Solomonoff's work on induction and Chaitin's Algorithmic Information Theory stuff (the line of descent is less explicit with the latter), and a number of Schmidhuber's academic descendants, most prominently his student Marcus Hutter [hutter1.net] and *his* student Shane Legg [vetta.org], have taken this ball and run with it in interesting ways.

Theorists vs. Practitioners, attitudes towards CS (1)

jonaskoelker (922170) | more than 4 years ago | (#30930932)

I guess it's just an "implementation issue". Ah, the chorus of the pure theorist...

Here's a thought (I haven't decided whether I agree with it):

Would it make sense to divide the work of creating AI into the Getting Ideas part and the Turning Ideas Into Code part? The idea being that you can let people who are good at one part do that part, and let people who are good at the other do the other part. (That goes back to Adam Smith, division of labour.)

Suppose a physicist establishes a theory about the reflection of light which (among other things) can be used to make more efficient solar cells. Yet he doesn't make any solar cells. Would he be met with the same attitude? Is that "just an implementation issue" too?

Or say an astronomer discovers a new celestial object. Do people poo-poo him because he hasn't gone there? :P (Okay, this one is stretching it...)

I'm not saying your attitude is wrong. I'm wondering, and I hope some of you smart slashdotters can help me figure it out, why computer science researchers get met with the "You haven't turned it into a prototype (or product!) yet, come back when you have."

I think it's because what CS research creates is very close to what Software Engineers (/programmers) create: algorithms. Moreover, the algorithms created by research always solve a particular problem, because that's what algorithms do. In some sense, all CS research is applied, but since it's still research it's not applied enough---it's not a product.

Contrast that with what most scientific fields do: "prove" declarative claims about how the world works (quantum mechanics, planetary motion, natural selection, thermoelectric effect, ...). An algorithm relates to a declarative claim (about its correctness), but it has an imperative "(you can) Do this: ... (and only this)" bit attached to it that most other fields don't have.

I think I can find an exception in the field of medicine---much medical research is into the safety and effectiveness of "algorithms" for treating particular diseases (input chemical X). But they test finished "implementations"---you can't really figure out what chemical X does without inputting it and seeing what happens. Not yet, anyway---humans are big and complex, and to the best of my knowledge there isn't a good, complete model of how they work; that's unlike CS, where we can read a program and reason about what it does without running it.

I think it's the similarity between research output and engineering output that makes many people want researchers to do the engineers' job.

Would it really be a good thing if they did?

(That's not to say we should have a low bar for evidence for "truth", such as correctness or (for more fuzzy domains) effectiveness and usefulness.)

this isn't exactly new speculation (3, Informative)

Trepidity (597) | more than 4 years ago | (#30929726)

A minority of AI researchers have tackled the problem on and off, and even built some small-scale models of curious agents. One of the classic precursors is Doug Lenat's 1977 system Automated Mathematician [wikipedia.org], which shifted from the idea of using AI to prove theorems to instead looking for theorems that would be interesting if they were true (it didn't actually prove them; it was an interesting-conjecture generator). Essentially a model of mathematical curiosity.

Some interesting more recent work is a 2001 thesis [usyd.edu.au] that modeled curiosity as a social phenomenon in societies of agents, where agents try to find things that are: 1) new enough to interest their fellow agents; yet not 2) so new that they are incomprehensible in their cultural context.
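For a flavor of that sweet-spot criterion, here's a toy scoring function (purely illustrative, not the actual model from the thesis): the score peaks at moderate novelty and falls off at both extremes.

import math

def interestingness(novelty, sweet_spot=0.5, width=0.2):
    # Inverted-U curve: near zero for artifacts that are old hat (novelty ~ 0)
    # or incomprehensible (novelty ~ 1), highest somewhere in between.
    return math.exp(-((novelty - sweet_spot) / width) ** 2)

for n in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(n, round(interestingness(n), 3))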

(I'm an AI researcher, though not precisely in this area.)

Re:this isn't exactly new speculation (1)

MichaelSmith (789609) | more than 4 years ago | (#30930012)

How about chess playing software? Doesn't it experiment and explore possibilities?

Re:this isn't exactly new speculation (1)

slimjim8094 (941042) | more than 4 years ago | (#30930550)

Well, it's actually quite methodical. Generally there's a fixed number of moves of look-ahead (more with a faster processor), and it's simple to pick the move that leads to the best scenario, say, 7 moves down the road.
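Something like this fixed-depth sketch (negamax form; evaluate, moves and apply_move are hypothetical game-specific callbacks, not any real engine's API):

def negamax(position, depth, evaluate, moves, apply_move):
    # Return (score, move) for the move whose worst-case outcome,
    # `depth` plies ahead, is best for the player to move.
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position), None
    best_score, best_move = float("-inf"), None
    for m in legal:
        score, _ = negamax(apply_move(position, m), depth - 1,
                           evaluate, moves, apply_move)
        score = -score  # the opponent's best outcome is our worst
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move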

I don't know if you've ever heard of a book called Gödel, Escher, Bach, by a man named Douglas Hofstadter. But in it he posits that chess requires intelligent computers to play well - and this was written long before computers could beat top human players. He didn't like the idea that it could just be brute-forced, but that's effectively how it happened...

What's that like? I'm curious. (1)

syousef (465911) | more than 4 years ago | (#30930678)

I'm an AI researcher, though not precisely in this area

What's that like? I'm curious.

Re:What's that like? I'm curious. (4, Insightful)

Trepidity (597) | more than 4 years ago | (#30930720)

Depends greatly on what you research. Unfortunately, the vast majority isn't as glamorous as you might imagine. I work in a pretty interesting area (an academic area with connections to videogame AI and game design), and this sort of creativity / discovery-systems / curiosity / art / etc. research is interesting too. But the vast majority is more pedestrian. Sure, there are interesting applications: computer vision, robotics, planning, data mining, bioinformatics, etc. But 90% of the work that comes out is incrementalist stuff: relatively boring proofs of some fact, or a new algorithm that's 7% faster in some important special case (I suppose that's true of a lot of scientific fields, though).

It goes back and forth in waves, though. It seems that there will be waves of pretty exciting AI research, then a backlash as some of it goes over the top into sci-fi Singularity Is Nigh sort of AI, then things swing all the way in the other direction into AI as a really narrow field that's basically applied statistics, control theory, symbolic logic, and planning, and the only stuff that can get published is Rigorous stuff with Proofs (sort of a defensive reaction by people worried about being branded kooks). Then after a few years of that everyone realizes that 5000 more proofs in some super-narrow area aren't getting us anywhere because the field is stagnant with no direction, and people start doing more speculative applications and proposing new problems again. Then repeat.

It's somewhat unfortunate on the whole that there's such a big gap between what you might call "layperson AI" and "academic AI". The layperson AI (the singularity crowd, etc.) are excited about stuff, and have interesting goals, etc., but often do stuff that verges more on the sci-fi than the scientific. But academic AI is so scared of being them that it consciously tries at times to be super-boring so nobody mistakes them for Hans Moravec.

All this singularity stuff (0)

Anonymous Coward | more than 4 years ago | (#30929744)

All this singularity stuff seems pretty unsubstantiated to me. Programming curiosity? Maybe. Programming creativity? Well, that's the problem. If you want your program to be more creative than you and thus discover things you fundamentally wouldn't, that means you want it to be smarter than you. My theory, although I can't prove it, is that a program can never be smarter than its programmer.

I offer as evidence: it seems to me one way to describe a program is as a way to store intelligence. The programmer is effectively imparting his intelligence to the program. So the 50s come around and we make this thing called Fortran. Great, we've (basically permanently) stored our intelligence in that program. We can now utilize it to do much more intelligent things. We could even write a higher-level compiler in it. So today, most compilers are written in C, even if they target the same level of the stack as C. Once you've saved the intelligence, you've got it for good; you don't have to worry about it anymore.

When you start thinking about things this way, you begin to realize that the only way to make programs more intelligent is to endow them with some of your own. This is why I doubt they can ever be more intelligent than the programmer making them. You can't give them what you don't have.

FTA:

h+: In your excellent talk at the Singularity Summit 2009, you described simple algorithmic principles that underlie discovery, subjective beauty, selective attention, curiosity and creativity. What are those principles?
 
JS: They are very simple indeed. All we need is (1) An adaptive predictor or compressor of the continually growing sensory data history, reflecting what's currently known about sequences of actions and sensory inputs, (2) A learning algorithm (e.g., a recurrent neural network algorithm) that continually improves the predictor or compressor (detecting novel spatio-temporal patterns that subsequently become known patterns), (3) Intrinsic rewards measuring the predictor's or compressor's improvements due to the learning algorithm, (4) A reward optimizer or reinforcement learner that translates those rewards into action sequences expected to optimize future reward, thus motivating the agent to create additional novel patterns predictable or compressible in previously unknown ways.
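(For the record, here's a toy reading of (1)-(3), with an add-one-smoothed frequency model standing in for the "compressor" and reward defined as compression progress; point (4) would be any reinforcement learner maximizing these rewards. A sketch of the idea only, not Schmidhuber's actual system.)

import math
from collections import Counter

class CompressionProgressReward:
    ALPHABET = 2   # binary observations, just to keep the toy small

    def __init__(self):
        self.counts = Counter()
        self.history = []

    def _bits(self, counts):
        # Bits needed to encode the whole history under an add-one-smoothed model.
        total = sum(counts.values()) + self.ALPHABET
        return -sum(math.log2((counts[s] + 1) / total) for s in self.history)

    def observe(self, symbol):
        old_counts = Counter(self.counts)
        self.history.append(symbol)
        self.counts[symbol] += 1   # point (2): the learning step
        # point (3): reward = bits the improved model saves on the history
        return self._bits(old_counts) - self._bits(self.counts)

r = CompressionProgressReward()
# A perfectly predictable stream pays out less and less intrinsic reward:
print([round(r.observe(0), 3) for _ in range(6)])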

Oh really? Is that all you need?

Look carefully at number 3. How do you define what counts as improvement, and thus what gets rewarded? We're feeling overall pretty good about our economy right now, and I think Americans were feeling the same way in 1928, weren't they?

That brings me to another point: experience. Has it occurred to anyone that maybe we're going about AI the wrong way? A human, when viewed as a computer, needs about 15 years of 24/7 training and experience (minus sleep, ..... maybe) to become viable, much less competitive. Anyone else think that we evolved this way because, !shock!, evolution figured out that that's the most efficient way to do it? Even if we really improved our robotics/nano skills, it could be the case that the only way to get something to the level of a human is lots and lots of experience. In which case, would robotic replacements even be economical, given that humans can be produced at par with unskilled labor?

One more thing:

h+: If intelligent machines were created tomorrow, what sort of implications do you think that would have for humanity and civilization?
 
JS: Gödel machines and the like will rapidly improve themselves and become incomprehensible. It's a bit like asking an ant of 10 million years ago: If humans were created tomorrow, what sort of implications do you think that would have for all the ant colonies?

Maybe, but keep in mind that ants didn't create humans.

Personally, I'd rather research efforts be directed toward all the JS in /. 2.0 not sucking.

Just remember... (2, Funny)

jnnnnn (1079877) | more than 4 years ago | (#30929814)

the entire Universe, including everyone in it, is in principle computable by a completely deterministic computer program

.. as long as you start with a piece of fairy cake [wikipedia.org] .

Kidding, right? (1)

djupedal (584558) | more than 4 years ago | (#30929864)

Let me see....
if touch == [ouch] {
    @"damn it";
}
else {
    @"oh mama";
}

Only as smart as... (3, Interesting)

v(*_*)vvvv (233078) | more than 4 years ago | (#30929876)

If curiosity is a behavior, then it should be pretty straightforward. In fact, depending on how you define "curiosity", there are already many examples of programs that are curious. Google or Bing or any web crawler is definitely "curious". A satnav that searches for the best route from point A to point B could be "curious"...

A robot is only as smart as its smartest programmer.

And he ultimately addresses the possibility that the entire Universe, including everyone in it, is in principle computable by a completely deterministic computer program.

The problem that is often ignored with this and similar claims is the problem of observability as illustrated in areas such as quantum physics, and even economics.

You cannot calculate the behavior of a black box without opening it. If opening it alters the state of its contents, then it may even be impossible. And if you have no means of observation to begin with, then it is downright impossible. Before you can claim you can calculate the next moment in time, you must be able to claim you have observed and know all the variables within the system of interest.

Re:Only as smart as... (1)

jackchance (947926) | more than 4 years ago | (#30930876)

In fact, depending on how you define "curiosity", then there are already many examples of programs that are curious.

This is certainly true. Reinforcement learning [ualberta.ca] algorithms trade off between exploitation, choosing actions based on the assumption of a static environment, and exploration, testing alternatives in case the environment has changed. This could be considered a kind of curiosity. What is more interesting to me, as a neuroscientist, is the human ability to detect interesting sights or sounds and focus on them. It's like we have a fast but rough novelty detector that can guide our attention towards some event. There is evidence that the amygdala [nih.gov] is a key element in the neural circuit that detects interesting events, although the mechanism of detection isn't fully understood.
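The textbook form of that trade-off fits in a few lines; a minimal epsilon-greedy sketch (the value estimates would come from whatever reward signal the agent tracks):

import random

def epsilon_greedy(value_estimates, epsilon=0.1):
    # With probability epsilon, explore: try a random action in case the
    # environment has changed. Otherwise exploit the currently best action.
    if random.random() < epsilon:
        return random.randrange(len(value_estimates))
    return max(range(len(value_estimates)), key=lambda a: value_estimates[a])

# e.g., three actions with estimated values 1.0, 2.5 and 0.3:
print(epsilon_greedy([1.0, 2.5, 0.3]))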

A robot is only as smart as its smartest programmer.

This, under normal definitions of smart, is clearly false. One example: I can program an AI search algorithm to play chess that will make far smarter choices than I would ever be able to (I'm not that good at chess). Some might argue that a search algorithm isn't smart, it's just fast. But to an external observer interacting with the agent, the AI seems much smarter than me, the programmer.

The Curiosity Module (1)

rebelscience (1717928) | more than 4 years ago | (#30929902)

Interesting article. It's funny, but all along I always assumed that curiosity was part of the definition of intelligence. If it exists in humans and animals, then that's all the evidence we need to know that it can be programmed into a machine. The truth is that an intelligent program must learn, and learning is impossible without curiosity. Here's why. If you look at knowledge as a big tree with many branches and leaves, learning consists of adding new branches (big and small) and leaves to the tree. The sub-program that goes around the tree adding new leaves and branches while pruning others as needed is none other than the curiosity module or algorithm. Just a thought.
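Taking the tree metaphor literally, a hypothetical "curiosity module" might look something like this (every name here is made up for illustration; it's not a serious learner):

class KnowledgeNode:
    def __init__(self, fact):
        self.fact = fact
        self.children = []   # branches and leaves grown so far
        self.uses = 0        # how often this piece of knowledge gets exercised

def curiosity_pass(node, candidate_facts, is_novel, prune_below=1):
    # Prune branches that never get used, then graft on facts judged novel.
    node.children = [c for c in node.children if c.uses >= prune_below]
    for child in node.children:
        curiosity_pass(child, candidate_facts, is_novel, prune_below)
    for fact in candidate_facts:
        if is_novel(fact, node):
            node.children.append(KnowledgeNode(fact))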

And if curiosity can be programmed (0)

Anonymous Coward | more than 4 years ago | (#30929918)

Then why not hatred and brutality? Soon we will have AI bots trying to wipe out all humans of the wrong skin color, which at first will be specified by some human. But eventually the bots will figure out that "shiny aluminum" is the only non-wrong skin color, and set off to wipe out ALL humans. Bleccch.

It Would Not Matter (1)

b4upoo (166390) | more than 4 years ago | (#30929924)

A completely deterministic program creating the universe and all in it would be meaningless unless some being could use it like a TV show. Perhaps a universe that is not completely deterministic might be a better product, with more uses to a supreme being. Perhaps that is why the classic debate between those who believe mankind has no free will and those who believe it is all about free will leaves both sides wanting. Individuals with limited free will may match the actions of other things in this universe.

Evolution staring us in the face. SURVIVAL! (1)

DigiShaman (671371) | more than 4 years ago | (#30929934)

As I understand it, everything we learn and do can ultimately be condensed into one thing: survival. Think about it: we are alive today because the core tenet of our existence hasn't been broken yet. We, as a species, continue to survive. Different behaviors do nothing but aid or take a different path to maintaining this goal. Perhaps curiosity is nothing but an attempt to make our survival more efficient. Perhaps it's a luxury suited only for higher-level organisms. Who knows.

My advice? Just create many iterations of AI and pit them against each other. Lather, rinse, repeat. In other words, just let nature take its course.
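That's more or less the genetic-algorithm recipe. A bare-bones sketch (the agents, fitness function and mutation operator are all placeholders):

import random

def evolve(population, fitness, mutate, generations=100, keep=0.2):
    # Lather: score everyone. Rinse: keep the top fraction.
    # Repeat: refill the pool with mutated copies of the survivors.
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[:max(1, int(len(ranked) * keep))]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(len(ranked) - len(survivors))]
        population = survivors + offspring
    return population

# Toy usage: "agents" are just numbers and fitness prefers values near 42.
final = evolve([random.uniform(0, 100) for _ in range(50)],
               fitness=lambda x: -abs(x - 42),
               mutate=lambda x: x + random.gauss(0, 1))
print(sorted(final, key=lambda x: abs(x - 42))[0])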

Of course it can. (1, Interesting)

MindlessAutomata (1282944) | more than 4 years ago | (#30929964)

Of course curiosity can be programmed. What are humans if not big, fleshy, biological machines of sorts? Granted we do not work like computers do, but the underlying processes are still structured and computational--if the brain were chaotic it wouldn't work.

Of course, some people will handwave with "the soul" or silly objections by Searle...

Re:Of course it can. (4, Interesting)

Fantastic Lad (198284) | more than 4 years ago | (#30930940)

Of course curiosity can be programmed. What are humans if not big, fleshy, biological machines of sorts? Granted we do not work like computers do, but the underlying processes are still structured and computational--if the brain were chaotic it wouldn't work.

~waves hand~ Speak for yourself, Mister Roboto. ~/waves hand~

But seriously, this is a really fascinating question. Souls aren't handed out like candy. You have to build them through main force, by actively choosing to be aware from moment to moment. What I am finding to be the biggest challenge is that it requires the supreme effort of recognizing one's own automatic nature and cleaning the gunk out of it.

Every time some subject comes up in conversation which makes me twitch or sweat or want to pull away, THAT indicates a piece of gunk. Each time I want to fall back and use a comfortable and proven behavior routine to deal with a given moment, THAT indicates a piece of gunk.

After one does enough work, you begin to see very clearly just how messy and automatic the people around you are. -These days, I find I am constantly aware of people's programs and little acts, why they work and what they are designed to do, and where people get stuck running those silly programs over and over day after day, year after year without ever stopping to ask, "What is the real me under this?". The soul is that part of us which is capable of recognizing the automatic nature of the brain and body and stepping in through an application of Will to interrupt the code execution.

It's difficult and the ego doesn't like it at all; any suggestion that one is a robot is usually met with disgust and fear, if the accusation is even understood in the first place. The Ego is, I think, a foreign installation designed exactly to keep us from performing that self-examination. With the Ego in place and strong, there is no hope of breaking out of the cage of automatic behavior.

Like I said, a fascinating topic.

-FL

Twilight (2, Interesting)

serps (517783) | more than 4 years ago | (#30929972)

This article reminds me of the short story Twilight [wikipedia.org] by John W. Campbell. I read it when I was a kid and it left a lasting impression that, should humans lose their curiosity, the striving for knowledge might yet continue.

And then when I read about the current state of the education system, I get just a bit worried...

Fp Gtrollkore?! (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#30929986)

Percent of t4e *BSD result o!f a quarrel and exciting;

Hubris (2)

shadowbearer (554144) | more than 4 years ago | (#30930048)

'is to build an optimal scientist, then retire.'

  Build a what?

  I suspect it's already retirement time. No offense.

SB

pseudo-code (1)

Korbeau (913903) | more than 4 years ago | (#30930082)

10: CALL Monolith
20: PRAISE Monolith
30: GOTO 50
40: Understand Monolith
50: Satiated = CALL Curiosity
60: IF Satiated > Infinity GOTO 40
70: ELSE GOTO 50

Sure. (1)

fahrbot-bot (874524) | more than 4 years ago | (#30930088)

To paraphrase Arthur C. Clarke's third law [wikipedia.org] :

Any sufficiently advanced technology is indistinguishable from Perl.

Yawn. (0)

Anonymous Coward | more than 4 years ago | (#30930096)

This is like the Drake equation. More boring bullshit from theorists that has nothing to do with reality. Show me something in motion, show me a computer processing something, and then we'll talk. So you can describe the "algorithmic principle" of curiosity? So what? I could probably describe myself taking a shit and barfing at the same time algorithmically if I wanted. These people are so far up in their ivory tower they forget that at the end of the day you have to give instructions to a microprocessor. Stop wasting my time and Get Real.

Curiosity can be deprogrammed - watch fox news! (0)

Anonymous Coward | more than 4 years ago | (#30930112)

The set up was so perfect, someone had to say it!

Note to the deprogrammed - this is not a pro-anything joke, just an anti-fox-news joke.

I read that a bit wrong at first (1)

FalseModesty (166253) | more than 4 years ago | (#30930160)

"his main scientific ambition 'is to build an optimal scientist, then retire.' The Cognitive Robotics professor has worked ..." Woo hoo! The Robotic Cognitive Professor worked! Oh, wait...

All well and good but what about a soul? (1)

BlueCoder (223005) | more than 4 years ago | (#30930178)

Our greatest gift to god will be creating a mind that can believe in him.

Why is it that AI research is always misled by its name? Namely, researchers are so focused on the intelligence aspect of a programmed mind that they completely fail to recognize the subjective emotions and motivation they should be focusing on.

What is a soul? It's the part of a mind that is able to make a choice. It's the part of the mind that isn't logical. It's the part of the mind that can judge something as good or bad. It has beliefs. It can be informed by reasoning, but it can still choose mysticism over reason. It wants, and it can choose. Behind every mind there is motivation. Sure, it's still a program, but it's the one that matters.

Just because you can give an ant mind super intelligence doesn't make it any less of an ant. It understands more, but it is still an ant and wants what ants want. The reverse of this is a complex soul that can't make sense of the world around him; we tend to call this autism. Maybe the former is autism as well.

Most people should be able to agree that psychologists know enough that they can actually drive a sane person to insanity. Therefore we should also be able to drive an artificial mind to insanity.

It is not enough to recognize beauty; you have to feel it, you have to know and believe that it is good and right.

Soul != Curiosity (0)

Anonymous Coward | more than 4 years ago | (#30930842)

Something with a soul (e.g., a human) may not be curious about anything.

And who is to say something that is soulless cannot be curious?

Guys (1)

COFFEESLEEP (1717844) | more than 4 years ago | (#30930306)

I've devised an algorithm that tells me with 100% certainty that this guy's ego is way too far up his ass.

Re:Guys (1)

Jeremi (14640) | more than 4 years ago | (#30930460)

I've devised an algorithm that tells me with 100% certainty that this guy's ego is way too far up his ass.

Random nobody spends 45 seconds skimming an article on the Internet, feels qualified to throw insults. Details at 11.

Intuition (0)

Anonymous Coward | more than 4 years ago | (#30930628)

Isn't curiosity really "intuition"? A part of the brain whose function is to compress all the data from the ears and eyes to a lower "bitrate" and to fill in the blanks between sensors by pattern-matching.

Imo, what you need to write to simulate "curiosity" is pattern-matching software: take input from many sources (sensors) and try to find matches between them.

Doobie (1)

Quiet_Desperation (858215) | more than 4 years ago | (#30930644)

And he ultimately addresses the possibility that the entire Universe, including everyone in it, is in principle computable by a completely deterministic computer program."

After which he took another long drag on his joint and said, "It's like our whole universe is inside a single electron in a larger universe, you dig? Hey, pass those corn chips over, dude! Now where was I? What? Ah, never mind. Put on Conan. It's his last show."
