Perl Programming

Parrot Updates (91 comments)

BorrisYeltsin writes: "A couple of updates for Parrot are in a recent This Week on Perl 6; most importantly, Parrot 0.0.3 is out! Get it here, and the release notes are here. Also, Adam Turoff has put together the Parrot FAQ version 0.2, which addresses some of the more common questions about Parrot and Perl 6."
  • polymorphic support rocks!

    gr!
  • The parrot has voomed
  • by Anonymous Coward on Sunday December 30, 2001 @01:04PM (#2764956)
    • Spam (Score:3, Insightful)

      by Matts ( 1628 )
      A large number of Perl web sites have been spammed with this. I consider the manner in which it's been done quite rude, as it has in no way been personalised and is very spam-like in appearance (i.e. it's saying that DeveloperWorks' articles are of the highest quality - well they would, wouldn't they?).

      I'm not disputing the quality of the articles there, just pointing out that this has gone to several places, and even been posted on a few sites. I didn't post it on the one I admin because it was totally impersonal.
      • While it may not have been *wholly* proper, it comes close to my ideal world. It was true, targeted advertising that was unintrusive and informative. Don't beat it up too bad.
  • the faq isn't done yet - click on a link at the top and you go nowhere. the author of it forgot to put anchor tags throughout the document linking it back to the toc at the top
  • by Anonymous Coward
    7. What language is Parrot written in?

    C

    8. For the love of god, man, why?!?!?!?

    Because it's the best we've got.

    9. That's sad.

    So true. Regardless, C's available pretty much everywhere. Perl 5's in C, so we can potentially build any place perl 5 builds.
  • by jsse ( 254124 ) on Sunday December 30, 2001 @01:16PM (#2764998) Homepage Journal
    April Fool's joke [python.org] can become reality. Remember not to make bad jokes next April. :)
  • by krs-one ( 470715 )
    Read more here [perl.org]

    Seems like a cool thing, I don't know much about it though. :)

    -Vic
  • Old News (Score:3, Insightful)

    by mshiltonj ( 220311 ) <mshiltonjNO@SPAMgmail.com> on Sunday December 30, 2001 @01:44PM (#2765075) Homepage Journal
    Parrot 0.0.3 was released [perl.org] way back on the 11th of Dec, nearly three weeks ago.

    If you want *new* news on perl/parrot, the latest parrot in CVS is now "fully-functional [perl.org]" (interpret that however you want.)
  • Parrot?! (Score:1, Funny)

    by Anonymous Coward
    Anyone pining for fjords?
  • If you're interested in Parrot, get the version from CVS, and get on the mailing list. There's a hell of a lot of interesting and cool stuff that's gone in since 0.0.3, not least of which is JIT on a few platforms (Linux among them). Just check out the mailing list [develooper.com] for details.

    Oh, and if you run an unusual system, then get in contact with the parrot team! They need more exotic systems to get parrot building on!

  • by bcrowell ( 177657 ) on Sunday December 30, 2001 @02:04PM (#2765143) Homepage
    I remember seeing a comment by Larry Wall (?) to the effect that Perl isn't appropriate for large programs, which is why Perl isn't written in Perl. But now they say they're writing Perl's source-code-to-Parrot compiler in Perl. Was the original statement due to Perl's relaxed (some would say sloppy :-) approach to types and data hiding, or was it related to performance issues, or both? Will the Parrot implementation perform better for large programs?

    On a different topic, what about compatibility? Their FAQ says that, for instance, localtime will no longer return year-1900. Doesn't this break old code? They say there will be an automated Perl 5->Perl 6 converter, but it isn't going to fix stuff like year-1900...or is it?
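    For concreteness, the year-1900 behaviour in question, as it stands in Perl 5 today (a small sketch; the Perl 6 half is my reading of the FAQ, not confirmed behaviour):

    # Perl 5: localtime's year field counts years since 1900.
    my @t    = localtime;
    my $year = $t[5] + 1900;    # e.g. 101 + 1900 == 2001
    # If Perl 6's localtime returned 2001 directly, old code doing the
    # "+ 1900" dance would silently report the year 3901.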

    • by Elian ( 10270 ) on Sunday December 30, 2001 @02:22PM (#2765198) Homepage
      The perl parser's not written in perl for efficiency (both speed and memory use) and convenience (yacc is nice for some things, after all) reasons. Parrot's addressing the efficiency issues, and at this point there's more than enough knowledge and tools to get a good perl parser written in perl.

      As for compatibility, your perl 5 code will call routines that return perl 5 compatible values. The perl 6 code will call routines that return perl 6 compatible values. It'll work--it's a simple enough problem to deal with.
      • As for compatibility, your perl 5 code will call routines that return perl 5 compatible values. The perl 6 code will call routines that return perl 6 compatible values.

        Hmm...but the routines have the same name? Say you have a Perl 5 program that you want to convert to Perl 6. Now it's calling a different localtime routine. Doesn't your code break? How do you detect such breakage and fix it?

        • You'll need to change a bunch of stuff to build as native perl 6 anyway--array and hash access syntax is different in perl 6. It's just one more thing to change.

          I'll file a note to add a "potential compatibility issue" warning to things so you can get warned if you call any routine that behaves differently in perl 5 and perl 6. (Off by default, of course)
  • if this were some big software company, this would probably be version 3.0. (Internet Explorer, anyone?)
  • the FAQ has a nice light humor about it. i feel like after reading the FAQ that the maintainers like what they are doing and know what they are doing. well, of course they do! it just makes it feel more so.
  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Sunday December 30, 2001 @02:54PM (#2765284)
    Comment removed based on user account deletion
    • by Elian ( 10270 ) on Sunday December 30, 2001 @03:24PM (#2765367) Homepage
      The "Parrot Implementation Issues" piece suggests to me that the developer hasn't done any research or have much idea of the field they're programming in. Now, of course, it could be that the FAQ is merely flippant
      That would generally be the case. Most of the folks reading FAQs on this stuff don't have the background for a good discussion on this stuff, and the FAQ really isn't the place for it.
      * Java is not the only stack based interpreter out there - and is far from being a good example of one. Even ignoring FORTH which is a full blown language based around the idea, there's the UCSD p-System,
      Forth doesn't count--it's really not an interpreter or VM as such. And the UCSD p-System (which I did know about) ran on systems that are profoundly obsolete. The base hardware's a bit more capable than the 6502s in Apple ][s. (Much as I liked both)

      One example I didn't mention but should have was Visual Basic, which is apparently a stack-based interpreter system.

      A stack based system has architectural advantages over a register based system. It may be the FAQ author considered them, but it'd be nice to actually see acknowledgement rather than an assumption that such advantages are easily explained away. Register systems have a finite structure which, ultimately, will limit the overall design of the end product and intrude into how efficient it can be. Stack based systems have no such limitations - a stack is a stack.
      And register based systems have advantages over stack based systems. An awful lot of stack thrash is avoided in a register system--with more temp values handy you don't have the extraneous dups, stack rotates, and name lookups.

      And just because we have registers doesn't mean we don't have a stack. We do. (Several, actually) But registers let us toss a lot of the useless shuffling.
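      To make the "useless shuffling" concrete, here's a toy comparison of (a+b)*(a-b) on both models (my illustration in Perl, not Parrot bytecode):

      use strict;

      # Toy stack machine: the operands get pushed twice, and every
      # intermediate value lives on the stack.
      sub run_stack {
          my ($x, $y) = @_;
          my @s;
          push @s, $x; push @s, $y;
          push @s, pop(@s) + pop(@s);              # add
          push @s, $x; push @s, $y;                # reload both operands
          my $t = pop @s; push @s, pop(@s) - $t;   # sub
          $t = pop @s;    push @s, pop(@s) * $t;   # mul
          return pop @s;
      }

      # Toy register machine: operands stay put in named slots,
      # so there's no reloading or rotating.
      sub run_reg {
          my %r = (r1 => $_[0], r2 => $_[1]);
          $r{r3} = $r{r1} + $r{r2};
          $r{r4} = $r{r1} - $r{r2};
          return $r{r3} * $r{r4};
      }

      print run_stack(7, 3), "\n";   # 40
      print run_reg(7, 3), "\n";     # 40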

      The 68000 emulator on the Mac is somehow proof that register based VMs are the way to go
      ...Ever run UAE on an Intel CPU? You need to get into Pentium territory to get it to look like a damned A500
      The 68K emulation on the Mac is proof that it works and is viable, not that it's the way to go. And UAE's troubles are as much architectural issues (the x86 chip is really register starved) as anything else.

      Parrot's ops are generally not as low-level as machine instructions, so even if the register system was dreadfully inefficient compared to a stack system (and it's not, parrot's numbers are good) not that much time is spent dealing with it anyway. (Though I don't want to trade off even 3% speed hits)

      Maybe all these issues were answered, but if you're going to put together a FAQ, a "Yeah, I thought of it, but those ideas suck, mine's much better, but I'm not telling you why, indeed, I'm going to use the worst possible examples to demonstrate how little research I did into this" answer isn't terribly inspiring.
      This is dead-on. I'll get some more meat into the FAQ.
      Adam might
      ...
      I might, actually. Adam just pasted in my answers to the technical stuff.
      • Comment removed based on user account deletion
      • Since this seems to be serious, please consider the advantages of a mixed register+stack machine, with three 32-bit stacks and one 64-bit stack (though if you are using FORTH, you can use the FORTH model and drop the last one at the expense of having a set of double-cell primitives).

        Implementation: the registers (how many?) could sit below the base of one stack at an absolute address, so, similar to the 6502, a part of memory could be mapped directly. This is a virtual machine, but you still get advantages by knowing at compile time where the data resides. (This is one of the disadvantages of stack-based machines: everything has an extra level of indirection.) OTOH, how many registers should one have? I would propose using the same instruction set for accessing the registers and the stack, but with the stack set only allowed to be accessed via indirection from the stack pointer, and the stack pointer not allowed to go "negative" and get at the registers. The second and third stacks would need to be handled separately. If the absolute stack is limited in size (to, say, 256 entries) then another stack can be placed above it and allowed to grow down toward it (say this one is also limited to 256 entries), and the third above that one, growing up again. Say this one has a limit of 512 entries and is composed entirely of pointers to the heap (objects?). These sizes are picked out of the air and would need LOTS of tuning.

        OTOH: consider a machine with three stacks, but no registers. From the example of Forth, a lot of the operations will be devoted to accessing values two or three cells down in the stack. But this has the benefit that one doesn't need to track register allocations. This can be made to work, but one really needs to track stack frames, so one needs to be able to operate in an essentially unpredictable way on items within the current frame. So here one is basically using a set of register operations on the stack (i.e. within the current frame of the stack).

        The reason for hardware registers is that in hardware, fast memory is an expensive and scarce resource, so hardware designers tend to allocate it sparingly. But in a software implementation, the registers run in the same memory space as the rest of the program, so they lose many of their advantages. They can, however, save a level of indirection. And that can speed things up if it's on something that is being done frequently. So one alternative is to define the largest stack frame that one wishes to allow for, provide a way for a stack frame to be copied to a fixed location, and then have analogous operations based either on the fixed location, or on a stack frame pointer in the stack. So the stack is a set of modifiable stack frames (though only the top stack frame at any time is resizeable).

        As far as I know, nobody has defined what the "perfect virtual machine" would look like. There have been several attempts, but trade-offs are inherent in the process, and what is best for one circumstance will not necessarily be best for another. And size, as well as speed, is a real constraint.

        However, if one isn't modeling an existing hardware implementation, then the virtual machine should not end up looking like any of the hardware. This is because the trade offs are different for designing hardware and for designing software. (Fast RAM is only one example.)
        • Since this seems to be serious, please consider the advantages of a mixed register+stack machine, with three 32-bit stacks and one 64-bit stack (though if you are using FORTH, you can use the FORTH model and drop the last one at the expense of having a set of double-cell primitives).
          This has been part of the design from the very beginning. In addition to a general purpose stack, each of the register types has a stack associated with it so you can push and pop all the registers of a particular type at once.
          The reason for hardware registers is that in hardware, fast memory is an expensive and scarce resource.
          That's not the only reason. Bits and memory bus bandwidth are also scarce resources, and a small set of named scratchpad spots allows for significant speedups over fully qualifying everything with a memory location.
          But in a software implementation, the registers run in the same memory space as the rest of the program, so they lose many of their advantages.
          Many, but not all. The stack thrash is a big issue as well and, while there are ways to get around it by allowing references back in the stack, they're generally more awkward than a plain register setup.

          There's also an awful lot of literature on writing optimizers which is geared towards register machines, as that's what everyone's CPU is these days. I've not found much readable literature on optimizing a pure stack-based system.

          They can, however, save a level of indirection. And that can speed things up if it's on something that is being done frequently. So one alternative is to define the largest stack frame that one wishes to allow for, provide a way for a stack frame to be copied to a fixed location, and then have analogous operations based either on the fixed location, or on a stack frame pointer in the stack. So the stack is a set of modifiable stack frames (though only the top stack frame at any time is resizeable).
          Congrats, you just described a register-frame system. Which is what we already have. :)
          However, if one isn't modeling an existing hardware implementation, then the virtual machine should not end up looking like any of the hardware.
          One is always modelling hardware. That's what this stuff ultimately runs on. You can't ignore the underlying system if you want really good performance. And while a good chunk of our userbase is running on register-starved systems (x86) a lot aren't. The design's not got any serious disadvantages relative to a stack system on x86, and some good advantages on machines with plentiful registers (alpha, sparc, PPC, MIPS, IA64) so it's a win.
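          For the curious, a minimal sketch of "a stack associated with each register type" (my simplification in Perl, not Parrot's actual data structures):

          my @I = (0) x 32;      # the integer register file, I0..I31
          my @I_frames;          # the stack associated with that type
          sub push_i_frame { push @I_frames, [ @I ] }    # save all I-registers at once
          sub pop_i_frame  { @I = @{ pop @I_frames } }   # restore them all at once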
      • And register based systems have advantages over stack based systems. An awful lot of stack thrash is avoided in a register system--with more temp values handy you don't have the extraneous dups, stack rotates, and name lookups. And just because we have registers doesn't mean we don't have a stack. We do. (Several, actually) But registers let us toss a lot of the useless shuffling.
        This is a weak answer. The obvious and correct response is that just because the program says rotate stack, you don't really have to do it, just remember that all data accesses after this point have to be shifted one index. Think of how efficient APL is at knowing just what sections of an array need to be computed and how they are: the flip operation never really results in the array being flipped, just noted that data access now begins at the other end of the array.
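        The APL flip trick, sketched in Perl (an illustration of the deferred-work bookkeeping, not anyone's real VM):

        package LazyList;
        sub new  { my ($c, @v) = @_; bless { data => [@v], flipped => 0 }, $c }
        sub flip { $_[0]{flipped} ^= 1 }               # O(1): nothing moves
        sub at   { my ($s, $i) = @_;                   # reads honour the flag
                   $s->{flipped} ? $s->{data}[-1 - $i] : $s->{data}[$i] }

        package main;
        my $l = LazyList->new(1, 2, 3);
        $l->flip;                 # "flip" the list without copying it
        print $l->at(0), "\n";    # 3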

        I am a very big fan of stack machines and I think that an intelligent implementation can be built that would blow the doors off any finite register machine: think of the cache hits you can have operating on such local data instead of the random access model. Two stacks can also be used, one for short lived data and one for long lived data. There has also been a ton of research on variable lifetime and scoping analysis to know where to keep data.

        Why does everybody always want to make dumb interpreters and not something more intelligent?
        • I am a very big fan of stack machines and I think that an intelligent implementation can be built that would blow the doors off any finite register machine:
          Then go do it and prove my design wrong. I think you wildly underestimate the amount of complexity you're contemplating.
          Why does everybody always want to make dumb interpreters and not something more intelligent?
          Not everybody does. But just because you disagree with the design doesn't make it dumb.
          • I wasn't calling the design of the interpreter dumb. I see how it could be taken that way. When I said dumb, I meant as in "dumb terminal" (something without knowledge). The interpreter just seemed to look at an operation and do exactly what is asked of it, not optimizing on the fly. A language called K from http://kx.com is a smart interpreter in the sense that it does semantically what it is asked, but not necessarily how you ask it to be done, and it will blow C code out of the water many times. It is based on APL coolness and tricks, like not actually calculating the entire matrix being operated on, but only the section that is examined at the end.

            sorry for the confusion.
            • I wasn't calling the design of the interpreter dumb. I see how it could be taken that way. When I said dumb, I meant as in "dumb terminal" (something without knowledge). The interpreter just seemed to look at an operation and do exactly what is asked of it, not optimizing on the fly. A language called K from http://kx.com is a smart interpreter in the sense that it does semantically what it is asked, but not necessarily how you ask it to be done, and it will blow C code out of the water many times. It is based on APL coolness and tricks, like not actually calculating the entire matrix being operated on, but only the section that is examined at the end.
              Currently the interpreter does do what you ask it to, and nothing more. (Or less) The compiler is left with the task of emitting reasonably clever code.

              That doesn't mean we haven't taken lazy evaluation into account, but there are limits to what we can do. All the languages that are likely to run on the parrot engine have the potential for a lot of side-effects, and runtime dependency checking for that stuff usually ends up costing more than you gain.

              There are some areas where this sort of partial and lazy evaluation get you big wins, but they're pretty specialized areas. APL and Fortran will probably always be faster in their areas of particular specialty.

    • The TAO VM (the VM used in the "new" Amiga OS) is register based (with an "infinite" number of registers).

      So the 68k emulator is not the only VM to be register based.

      Now, I've never seen an interesting paper that compares the two approaches; I don't think that one approach is necessarily better than the other.

      PS (off-topic):
      The story in the acme journal linked in the FAQ is really impressive.
      http://use.perl.org/~acme/journal
    • The Java VM is not a "stack based interpreter". First of all, it's not an interpreter. More importantly, the Java byte codes are restricted in such a way that they are trivially converted into a register machine. In fact, the "restricted stack" approach that both JVM and CLR take has all the efficiency advantages of register machines and all the compactness and simplicity of stack machines. Perl6's method is likely much inferior.
      • Yes, the JVM is an interpreter. And yes, it is stack based. And no, there is no trivial transformation from stack to register based architectures. (There are transformations, but they're not trivial, and the result is often not great) And the restricted stack approach that the JVM and CLR take does not have both the efficiency advantages of register machines and the compactness of stack machines. You can't do that.
        • The JVM is a virtual machine specification. It's as much an "interpreter" as a specification of the Pentium instruction set is an interpreter, or as a photograph of a giraffe is a giraffe. The specification is stack based, but high performance implementations of it usually are not (they use the stack in the same way as any register-based instruction set, and as Perl6 does--to store activation frames and local variables). And, having implemented the transformation, I assure you that it is trivial--it's a couple of pages of code. What is hard is to assign the large number of registers you get that way to the small number of registers some machine has. But, guess what, that's the same problem you face with a register-based virtual machine, since you don't know ahead of time how many, or what kind of, registers any particular implementation can most efficiently support.

          A different way of looking at what the JVM and the CLR do is that they are a binary postfix representation of the Java parse tree. It doesn't make register allocation any harder or easier, it simply draws the line between source->byte-code->execution at a slightly different point. And it makes sense to draw the line where the JVM (and the CLR and the Smalltalk VM and lots of other virtual machines) does because register allocation is target machine specific.

          Arguing theoretically against the architecture of some of the most efficient and successful virtual machine based systems is futile. The proof is in the pudding. Current Perl interpreter performance is noticeably worse than many other interpreters (even if some Perl primitives like regular expressions are quite fast). Let's hope that will improve in Perl6 implementations. If you can do that with a register based VM, great.
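          A toy version of the stack-to-register rewrite described above (my sketch; the "s0"-style register names are invented). Because verified stack code has a statically known depth at every point, each stack slot can simply become a numbered virtual register:

          my @bytecode = ( [ load => 'a' ], [ load => 'b' ], [ 'add' ] );
          my $depth = 0;
          for my $op (@bytecode) {
              my ($name, $arg) = @$op;
              if ($name eq 'load') {
                  print "move s$depth, $arg\n";    # stack slot -> register
                  $depth++;
              }
              elsif ($name eq 'add') {
                  $depth--;
                  printf "add s%d, s%d, s%d\n", $depth - 1, $depth - 1, $depth;
              }
          }
          # Prints: move s0, a / move s1, b / add s0, s0, s1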

    • from The design of the Inferno virtual machine [bell-labs.com] by Phil Winterbottom and Rob Pike
      our experience with the implementation of a stack machine in the AT&T Crisp microprocessor [5] leads us to believe that stack architectures are inherently slower than register-based machines. Their design lengthens the critical path by replacing simple registers with a complex stack cache mechanism. In other words, it is a better idea to match the design of the VM to the processor than the other way around.
  • First, let me admit a bias: I am madly in love with an APL like language called K [kx.com]. It has destroyed some unreasonable language biases I had in the past by being simply an amazing language. I have become used to the hyper operator concept, even though in K functions are called verbs and the hyper decorations are called adverbs.

    My question is about how hyperness is applied to hyperness and how you make hyperness apply to only one side of the operator. Here are how you do things like this in K; are these possible in the new Perl? If not, these would be monumental omissions.

    ' is read each
    \: is read each left
    /: is read each right

    Given the lists x:1 2 3 and y:10 20 30 and value z:100

    x+'y is 11 22 33
    x+\:z is 101 102 103
    z+/:y is 110 120 130

    but these can all be done implicitly, so you really do not need the decorations: x+y, x+z, and z+y are fine. This allows you to walk down only a single list, while aggregating results.

    x+\:y is (11 21 31; 12 22 32; 13 23 33)
    x+/:y is (11 12 13; 21 22 23; 31 32 33)

    Then you can also walk down the left then right sides of list by \:/:(left each right each).

    You can walk down a list unarily: / is read over and +/x is 6, or ': is read each pair and +':x is 3 5.

    There are other adverbs, and they can be combined to modify each other arbitrarily. This winds up being an incredibly powerful way to write programs. It relieves the programmer of the burden of flow control and compacts code enormously. Think of removing all the loops from your code and replacing them with a couple of characters instead.
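    For readers who don't speak K, here are the first three results redone in everyday Perl 5 (my translation of the examples above):

    my @x = (1, 2, 3);
    my @y = (10, 20, 30);
    my $z = 100;

    my @each       = map { $x[$_] + $y[$_] } 0 .. $#x;   # x+'y  -> 11 22 33
    my @each_left  = map { $_ + $z } @x;                 # x+\:z -> 101 102 103
    my @each_right = map { $z + $_ } @y;                 # z+/:y -> 110 120 130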
    • Perl6 is not going to be as explicit about this as K apparently is. In Perl6 we already know the context of a variable because of all the $@% stuff, so we don't need to specify left/right variations of the same operator.

      Your examples become:

      @x = (1, 2, 3);
      @y = (10, 20, 30);
      $z = 100;

      @x ^+ @y is (11, 22, 33)
      @x ^+ $z or $z ^+ @x is (101,102,103)
      $z ^+ @y or @y ^+ $z is (110,120,130)

      # There is probably a more clever Perl6 way
      # also correctness is not guaranteed
      map { ($_ ^+ @y) } @x is (11 21 31; 12 22 32; 13 23 33)
      map { ($_ ^+ @x) } @y is (11 12 13; 21 22 23; 31 32 33)


      There is no @a ^^+ @b for the last two examples, but you might be able to define your own operators and have the hyper operator work on them.

      But Perl6 does not seem to want to go as far as your language K does. However, modifying the syntax of Perl6 on the fly is going to be VERY easy. Something like:

      use Ksyntax;

      @a ^\+ @b; #like x +\: y
      @a ^/+ @b; #like x +/: y

      no Ksyntax;
      #back to normal


      -LL.
      • Thanks for the answer. I think it is great that some other languages are finally learning about bulk data operators and adverbs.

        The map examples show that Perl didn't quite go as far as I would have liked to see it go in that direction. These bulk operators are one of the features that often make K a far faster language than C.

        Is there a reason these hyper meta-operators only scratched the surface of the concept? Not trying to pick on Perl, but it just seems that other people have these examples as great ammunition for the argument that Perl is just a kitchen-sink approach to a language: tossing the surface level of everything imaginable into a language without trying to think of the underlying concepts and unite them. For example, shouldn't map and hyperness be the same thing?
        • Is there a reason these hyper meta-operators only scratched the surface of the concept?

          Two possibilities come to mind:
          1) My answer was limited by my knowledge of how far Larry Wall is going with this stuff.
          2) Perl guys are getting gun-shy about creating line-noise-like syntax.

          What I want to see is explicit iterators. Then you would have the power/freedom to create a syntax enhancement module to create line noise syntax.

          What I mean by explicit iterators is a very object-oriented idea, like:

          my $iter_a = @a.iterator;
          my $iter_b = @b.iterator;
          while ( !$iter_a.atend and !$iter_b.atend ) {
              $iter_a.next += $iter_b.next;
          }
          With iterators, a lot of the large lists I walk in my day job could be handled much more intelligently and efficiently. We use POE and don't want one state to block others for too long.

          You could build a syntax modifying module to emulate K very efficiently.

          • Why would you opt for this explicit iterator syntax? Yes, this should be possible, but the language should really be much smarter than this.

            This construct is so prevalent that it should be more concise and optimizable. The example is of a mostly effect-free statement. There is a single assignment necessary, back to $iter_a; that is probably not semantically needed and is just used for "efficiency". But the iterator code requires 4 additional assignments: the two original iterator-creation assignments, then the implicit state changes in the next function. This destroys the ability to optimize and complicates the code, increasing source bloat and with it the probability of bugs.
            @a+@b
            or if you like
            @a.+(@b)
            describes the same process and seems far superior to the 5 line alternative.
            • You wrote:

              Why would you opt for this explicit iterator syntax?

              This was my motivation/answer:

              With iterators, a lot of the large lists I walk in my day job could be handled much more intelligently and efficiently. We use POE and don't want one state to block others for too long.

              I will try to expand on this. POE is a Perl module that gives the programmer a framework for writing state machines. It can be used to make your program behave a lot like a multi-threaded application (i.e. doing two things at once). It does this by slicing your work into discrete chunks (re: states). However, POE is a cooperative multitasking system. If your state takes a lot of time while something else (like network traffic) has to be handled, that other thing will be ignored/dropped/whatever.

              The point: it would be helpful to be able to do array/hash manipulations efficiently (not using indexing or copying out a list of keys) AND to use arrays/hashes X items at a time, then go do something else, then come back and do X more manipulations.

              Further, iterators map well to how we interact with external data sources. Imagine tie()ing a hash to a SQL DB view, and using hyper-operators or iterators on big data.

              BTW, this is my point. I work with big datasets. Each machine we have keeps statistics, by the minute, on the OS and processes. Other administrative machines sum up all the data collected for a cluster of machines. We aggregate that data onto a machine for clusters of clusters. And finally we have a machine with the aggregates of all our machines. Roughly 3,000 machines worldwide. Our processes must be able to aggregate data from their children and also respond to their parents' queries at the same time. Iterators would really help.

              Hyper-operators are cool for doing all your work in one fell swoop (even if it takes 10 seconds). Iterators are good for walking a list, stopping for some reason (like the data item you just iterated over required immediate action), then getting back to work on the list. If I had to do a (0..1_000_000) $a[$i] indexing or keys %hash for a hash with 500_000 keys, I could run out of memory or wait 60 seconds just for the copy to finish.
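              Roughly the pattern I mean, using POE's standard inline_states API (the work list, chunk size and handle_item are all made up for illustration):

              use POE;

              my @work = (1 .. 1_000_000);
              my $pos  = 0;

              POE::Session->create(
                  inline_states => {
                      _start => sub { $_[KERNEL]->yield('chunk') },
                      chunk  => sub {
                          my $end = $pos + 999;
                          $end = $#work if $end > $#work;
                          handle_item($work[$_]) for $pos .. $end;
                          $pos = $end + 1;
                          # Requeue ourselves; network traffic and parent
                          # queries get a chance to run in between chunks.
                          $_[KERNEL]->yield('chunk') if $pos <= $#work;
                      },
                  },
              );
              POE::Kernel->run();

              sub handle_item { }    # stand-in for the real per-item work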

              Here is a good quote: "Threads are for programmers who don't know how to program state machines" -Alan Cox.

              Later.

              • Yes, I see your point about wanting to stop evaluation periodically; laziness is a very important concept to allow. POE also sounds very cool.

                I do not use nearly as much Perl as I should, so I cannot comment as to specific Perl usages. However, you should never run out of memory with hyper operators, since it is too easy for the language to play swap games itself. This is my opinion, at least.

                I too use very large data sets at work tied to databases. I do massive text processing tasks working as a core developer at one of the largest search engines on the web. For this task we use a database called KDB.

                <PLUG>
                KDB [kx.com] is entirely based around bulk data operations. This database is written in K [kx.com]. It is the fastest database I know of, doing 50,000 transactions a second on a 100 million records [kx.com].
                </PLUG>
        • Perl is evolving. The introduction of the hyper operators is an incremental change from the status quo. If they work out as well as they seem to, there's no reason why people won't be able to experiment with, and then propose for adoption, more general versions.

          On the other hand, every system for denoting iteration and recursion eventually runs out of steam; it's just a question of how soon. So I don't think it's a problem that K goes further than Perl; remember that K may be intended for great things, but Perl has already achieved great things, and you don't want to kill the baby while trying to improve its training...

  • Perl Documentation (Score:1, Insightful)

    by Anonymous Coward
    Perl has served its purpose. Sad to say, but its day is done. The time has come for Perl to yield the spotlight to newer, better scripting languages. The reasons for Perl's imminent demise should be obvious to anyone with an ounce of common sense. Nevertheless, the main causes of Perl's lack of fitness deserve to be recounted here:

    Perl is emphatically not an object-oriented language. Perl's OO features were crudely hacked in after-the-fact. This unfortunate compromise is the equivalent of trying to bolt an internal-combustion engine onto a stagecoach instead of designing an automobile from the ground up.

    Too many simple tasks are pointlessly complicated. Take the simple example of creating an array whose elements are arrays. Not only does the developer need to use additional inner brackets for each element, but they must also remember to use the unique @{$a[1]} syntax when referencing. Why all the extra steps? Who knows.

    Perl is notoriously impossible to read and maintain. Walk into any bar frequented after-hours by veteran developers and you'll hear story after story being swapped about having to decipher brain-crushing lines of text like: "(my @parsed =$URL =~ m@(\w+)://([^/:]+)(:\d*)?([^#]*)@) || return undef;". This unreadability is in part the result of the fact that:

    Perl attempts to be all things to all people and ends up being second-rate at everything. Perl is widely known as the "duct tape of the internet", and it performs superbly in this role. However, just as you cannot build a house out of duct tape alone, so attempting to turn a language that was originally developed for scripting brief, handy utilities into a do-all, be-all programming language will only result in the buggy, bloated, "write-only" mess that Perl has become.

    Subroutine signatures, orthogonals, method access, data inheritance: this list could go on and on. But there is no real need. It is now clear that Perl is doomed. At this very moment, Perl 6.0 is being cobbled together, with bulletins about the myriad upcoming features of the new version being issued with titles referring to the Biblical Book of the Apocalypse, the favorite text of messianic streetcorner lunatics. There is no better indicator of the deranged state of mind of the developers behind Perl than this unfortunate choice of imagery. Software developers with any interest in future employment/relevance should seize this opportunity to attain fluency in Ruby or Python and donate their Perl books to the History Department of their local university.

    • 1. Perl is emphatically not an object-oriented language.

      Implied assumption - "Pure" OO programming is always better for all problem spaces. OO programming while useful is not the end all, be all.

      C is not object-oriented, and many of the most important applications are written in C. Why is that?

      I might add that Perl has the fantastic advantage of being ambidextrous, allowing the user to choose the best implementation for the problem at hand.

      I find this criticism of Perl interesting, since one of the special promises of OO was to facilitate reusable code modules. Perl's CPAN module archive has been very successful in delivering code reuse on a large scale. I have always been amazed at the scope and quality of Perl modules available on CPAN. Also, the interfaces to these modules are often simple and straightforward.

      2. Too many simple tasks are pointlessly complicated

      Perl's primary strength is making hard things easy and impossible things possible. The example you mentioned about lists of lists is a no-brainer if you understand the syntax. And what language doesn't have its own "special syntax"?
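      For instance (plain Perl 5; my example):

      my @a = ( [1, 2, 3], [4, 5, 6] );   # inner brackets make array refs
      print $a[1][2], "\n";               # 6 -- the indexing just chains
      my @row = @{ $a[1] };               # @{...} pulls a whole row back out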

      Perl has a collection of functions that make hard things easy. Some of my favorites are:

      - grep, map, sort (implied loops)
      - symbolic references ($var = 'temperature'; $$var = 50;)
      - variable string interpolation, ($str = "Today's date is $date";)
      - pack and unpack.

      etc.
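      A few one-liners showing the implied-loop functions in action (my examples):

      my @values  = (4, 12, 9, 40);
      my @big     = grep { $_ > 10 } @values;    # filter: 12 40
      my @squares = map  { $_ * $_ } 1 .. 5;     # transform: 1 4 9 16 25
      my @sorted  = sort { $a <=> $b } @big;     # numeric sort: 12 40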

      3. Perl is notoriously impossible to read and maintain.

      Perl is no more difficult to read than C, C++, Java or Python. The example you provided has more to do with regular expressions and pattern matching than with Perl syntax. Regular expressions are extremely powerful, often replacing dozens of lines of code with a single line. Other languages implement regular expressions too (such as Python and Java), and when they are employed in those environments they can be equally obtuse.

      Also, to strengthen your point you cobble several operations together. I can just as easily write a complicated C or Java line of code that relies on several levels of precedence and implied functionality.
    • by hey! ( 33014 )
      Moderators -- don't mod somebody flamebait because you disagree with them. The above was offtopic, maybe.

      I happen to disagree with a lot of what the writer says, but I think he was making substantive comments, to which I think I can respond.

      Perl is emphatically not an object-oriented language.

      Perhaps true depending on the criteria you use, but over the years I've come to care a lot less about this kind of taxonomic issue and more about getting things done without a fuss. The question I'd prefer to ask is, does it support object oriented design? Personally, I'd probably go with Python or Java for a project requiring large scale OO design. However, serious Perl hackers get by pretty well in Perl.

      Too many simple tasks are pointlessly complicated.

      True, but many complex tasks are very easy in Perl too. A lot of language flamewars unconsciously adopt a desert island scenario: if you had to do ALL your work in one language, what would that language be like? Well, you don't have to get by with one language.

      Perl is notoriously impossible to read and maintain.

      This is simply untrue. Speaking as a very occasional and non-expert Perl user who has at various points needed to maintain some fairly complex Perl CGIs, well-structured Perl is quite easy for the non-Perl-guru to maintain. This is not to say there aren't subtle issues (e.g. confusing scalar and array contexts) that can bite a newbie trying to add substantial new functionality to Perl software, but when code has been written by a Perl expert and is well structured, Perl is in fact very easy to maintain.

      If there is a kernel of truth to this myth, it is perhaps that Perl tempts the inexperienced to create obfuscated code. However, an experienced coder with good habits will produce highly maintainable code.

      Perl attempts to be all things to all people and ends up being second-rate at everything.

      First of all, Perl beats anything else I've tried hands down for doing filter type programs (i.e. transforming input streams). When you do a lot of this kind of thing, it's very convenient to have a tool that spans the range of complexity from things you would do in awk to things you'd use lex and yacc for. If this is the kind of work you do, then Perl is for you. If this kind of work is not what you do or is just a very small part of what you do, then Perl will probably seem somewhat pointless to you. However, you shouldn't expect your experience to be universal.

      Secondly, IMO the fact that a language is general purpose (like Perl) doesn't mean it has to be the best, or even a very good solution for every kind of problem you can imagine. There are some languages out there which are fairly good for a wide variety of problems (Java comes to mind); that is an important niche. However there are niches for languages that are excellent at one or two things. These still need to support styles of work that may not be their strongest suit, however, because real world programs have to address a mix of issues.
    • Perl is emphatically not an object-oriented language.

      And the biggest problem with Python is that it is too fundamentally object-oriented. Seriously. Lack of OO features in non-OO languages is a classic newbie complaint.

      Too many simple tasks are pointlessly complicated.

      You could argue the same for Python. In Perl, you can check for the existence of a file or use regular expressions with built-in operators. In Python you have to import a module and work with the supplied API, which is much more awkward than the Perl way. Both this argument and the "impossible to read and maintain" one can be leveled against *any* language, depending on the examples chosen.
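      Concretely, the built-ins in question (Perl 5; the file name and URL are made up):

      print "found\n" if -e '/etc/hosts';          # file test is an operator
      my $url = 'http://example.com/index.html';
      my ($host) = $url =~ m{^\w+://([^/:]+)};     # regexes live in the syntax
      # Python needs os.path.exists() and the re module for the same two jobs.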
    • Yes, the language that evolved in an essentially smooth line from Perl 1.0 to Perl 5.8 is indeed coming to the end of its active development and entering the comfortable retirement of maintenance.

      But Perl's creators and users aren't ready to quit.

      That's why there's a Perl 6. :-)

  • A very neat thing would be to support the Parrot interpreter in web browsers.

    It would be a nice replacement for Java. I'd just love to write client-side web applications in Perl.


  • Would a brief line of description saying what Parrot actually is go amiss?

    I assume we're not talking about the next stage of evolution in brightly coloured birds...

    Cheers, Ian

  • Interesting as these detailed implementation aspects are, I'd like to take a moment to step back and look at the potential of, er, Parrot-like developments.

    Linux as a platform won't continue to grow relative to .NET and Java unless it includes a VM. As we know, cross-hardware platform distribution issues are already affecting PPC and ARM users and this will get worse as small non-x86 devices spread.

    Linus should therefore make a New Year's resolution to 'anoint' a VM as the future target platform and encourage libraries to be built around it. Parrot could be this VM, but from this point of view Parrot doesn't offer any fundamental technical advantage over the Java or .NET VMs, which brings me to my next point.

    While code distribution requirements alone are sufficient to justify a 'Linux VM', there is another very interesting potential benefit which I haven't seen discussed yet. This applies specifically to Open Source and the principles behind it. This is that a VM could be implemented where the source code and compiled code were semantically equivalent. This means, of course, that all that ever needs to be distributed is the 'compiled' form and, by its very nature, this code is always open.

    This is almost the case with Java bytecode, in that decompilers such as JAD can (usually) produce editable code from compiled .class files. However, comments (and layout) are not stored in the bytecode and this is starting to emerge as a significant problem for advanced development tools. For a brief illustration, consider a problem with IBM's new Eclipse IDE. Eclipse extracts Javadoc-style comments from source code that describe methods, parameters and other API aspects. It makes these available as tool-tip-like hints when you start to code a call to a particular method. However, commercial code is not distributed as source but as class files with accompanying Javadoc-generated HTML files. Although, between the two, all the information that Eclipse needs is there in principle, in practice it is hard for it to put the pieces back together and use them.

    Perhaps a better example of source-code equivalence goes back to the old days of home computers with interpreted BASIC. Most people will remember that BASIC code was 'tokenized', including keywords, blank lines, punctuation and comments, in a way that could be both edited and executed. Unfortunately, the spread of this useful model was halted when PCs became big enough for 'real' compilers to run.

    What's needed, therefore, is a more sophisticated version of the old tokenized program representation, and such systems have been developed for Scheme (and FORTH, I think), usually known as Abstract Syntax Tree interpreters, in contrast to bytecode interpreters like Parrot's current implementation. There may be some middle ground here - I have a few pointers to research in this area, such as Anton Ertl's work [tuwien.ac.at].

    Now, having solved Linux's portability problem and given a big boost to open source, we could perhaps rest there with some satisfaction. However, I'd argue that there's one more important requirement to address and this time it will be one that is familiar to programmers already, if mostly only those working with languages like LISP and Scheme. This is the provision of dynamic reflective capabilities, meaning the ability to treat programs as data.

    They say that any large C application includes an interpreter for a higher-level language. In finance, we see applications supporting the entry of complex formulas; in CAD/CAM people build parts which are themselves programmable such as a parameterizable aircraft undercarriage. These features turn users into programmers, but programmers working with a high-level, specialized language.

    LISP is a much better language to build such applications (which is why a lot of AutoCAD is in LISP) since LISP programs are themselves data structures that can be added and played with at run-time. Java, .NET, Perl 5 and Python have some limited features in this area, mostly to do with loading code dynamically and supporting development tools, but they are awkward and ugly in the extreme. There is demonstrably huge potential for a mainstream language with this level of flexibility - any sophisticated business will be able to come up with examples like those above.
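    Perl 5's closest approximation of "programs as data" is the string eval, which hints at (and falls well short of) the LISP ideal. A tiny sketch, with a made-up formula:

    # A user-supplied formula compiled into a code ref at run time.
    my $formula = 'sub { my ($x) = @_; $x * 1.175 }';
    my $f = eval $formula or die $@;
    print $f->(100), "\n";    # 117.5 -- but the program is a string here,
                              # not a structure you can walk or rewrite.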

    So, to wrap up, it looks like others have left the goalposts wide open for us and there is a significant opportunity to exploit. It will be interesting to see whether this Parrot can do more than mimic other platforms.
