
LoTR, Linux, and Database Management 128

minus23 writes: "Very interesting article over at Digitalanimators.com, talking about some of the challenges faced by the crew working on the second installment in the Lord of the Rings trilogy. Interesting bits include managing an off-site database of 45 TB, Linux workstations from IBM, 1400 processors, and the animation methods to be used on Gollum. It's a good thing. :)"
This discussion has been archived. No new comments can be posted.

  • "Labrie reported that the facility will also need to expand its render farm from 400 processors to 700."

    All that power and it might, just might, look as good as my one brain imagined it.

    -H
    .
    • Heh. Yeah...
      And J.R.R. Tolkien's one brain imagined it all, versus the power of those 1400 processors, plus the power of the 230 graphics artists, a dedicated asset management company, the production team, audio guys, actors, and so on :)

      Weird to think...

  • Don't overlook the last sentence of the story - that Labrie has since left the company.

    Could someone possibly go on to "bigger and better" things after that? :-)
  • by saphena ( 322272 ) on Sunday July 07, 2002 @07:10PM (#3838432) Homepage
    When I read LOTR many years ago, when computers were hard to come by and certainly not used for frivolity such as generating fairy tales, I had no trouble whatsoever "seeing" Gollum and all the other characters just from the textual descriptions.

    Does all this computing power mean we've advanced?
    • It's just for the kids these days. Can't push those few brain cells of theirs that are supposed to bring books to life, so they harness a few rooms of whirring computers to do it for them.

      Now if we just used those things to do molecular interaction models for AIDS vaccines, maybe Tolkien wouldn't be spinning in his grave right now.
      • Now if we just used those things to do molecular interaction models for AIDS vaccines, maybe Tolkien wouldn't be spinning in his grave right now.

        This rests on two dicey assumptions:

        - Tolkien would not have approved of the movie. Most of the diehard Tolkien fans I know thought the movie made some annoying errors and changes, but felt the overall product was stunning. But we're talking about visuals, and I can't see any reason for complaint. It wasn't anything like how *I* imagined Middle Earth, but I thought it was just as good.

        - Molecular interaction models are actually worthwhile. You do not simply fire up lots of computers and find vaccines. It takes accurate models, and real science, and years of theory and benchwork. It's amazing how many people here think computers are going to make traditional science obsolete. Believe me, there is lots of money being spent on this field. There's no reason why using 400-700 processors, paid for by private investors, is a "waste". By the time we see really worthwhile results from de novo computational drug design, that render farm will be worth $400 on eBay.
    • by palo0019 ( 120416 ) on Sunday July 07, 2002 @08:47PM (#3838764) Homepage
      Thanks grandpa! Tell me the story about how you walked to school 50 miles in the snow barefoot again!
    • The movies are a supplement to the books in this particular case.

      Does all this computing power mean we've advanced?

      It means our technology has advanced, yes. Is using technology for art and entertainment frivolous? I think not. We, as humans, are creative, and using technology to exhibit this creativity is in our nature.
    • I asked the same question myself when my 11 year old son came home after watching the movie.

      He started reading the trilogy when he was 9 and hasn't lost a bit of energy to read it again and again. He was (is) absolutely fascinated by Tolkien's masterpiece. I was very surprised when he told me how disappointed he was with the movie. Explanation? Very simple and sincere - someone else has completely ruined the world my son was imagining, creating and dreaming of for over two years - his words.

      Nothing helps now - telling him about the freedom of artistic impression/expression, amazing technology that made all this possible, nothing. He can't start reading LoTR anymore.

      I don't know, seems too much - doesn't help even knowing that most of it was engineered on Linux.

      • Explanation? Very simple and sincere - someone else has completely ruined the world my son was imagining, creating and dreaming of for over two years - his words.

        If I were you, I'd be much more concerned by my son's habit of speaking in the third person.

    • Movies don't replace books. No one likes reading more than me. I've read LOTR more than 25 times and I will be reading it again. But I still like the movies. They are my favourite movies to watch - because of the books. So yes - we have advanced. It's amazing to see that Peter Jackson brought these characters to life exactly the way I imagined them.
  • "The problem with Linux is that it's an open source system, so if you are having issues or difficulties with its stability, it's like pushing on a rope; there's no single vendor to deal with. You have to be self-deterministic in terms of how things work. You have to make your own choices and do your own tests on motherboards, graphics cards, applications, operating system releases, all those kinds of things."

    Call up any vendor. Tell them their systems are unstable out of the box. Think they're gonna say something like, "oh, yeah, just tweak this little setting...". I don't think the quote above is very logical; no vendor is going to be that helpful with stability issues. Maybe "stability issues" was just a poor choice of words?
    • On a consumer level I would agree with you; I doubt Dell/HPaq/IBM etc. would bother with your average home user on stability issues. On the other hand, this is a very high profile client. It wouldn't surprise me if they had dedicated staff available to help with any issues, stability or otherwise, that came up.
      • Comment removed based on user account deletion
        • I have not found this to be the case with enterprise vendors and customers who pay for real support. When you have access to the vendor's engineers (not just the front-line tech support drones), you can get answers to problems that would completely stump an outsider.

          If you ring up Sun with a Platinum support call for an E15K, I can pretty much guarantee that they won't start by telling you to "restore the system".

          The main advantage to having a single point of contact for this sort of support is that you have a better shot at accessing the expertise (though usually indirectly) of the primary maintainers of a given piece of code. IBM is probably well equipped to deal with a wide range of Linux problems, but there will definitely be times when the best resource is someone at SGI, HP or some random university. This advantage is largely mitigated by the widespread availability of source code, but it could still be significant when you need answers right now.
          • I have not found this to be the case with enterprise vendors and customers who pay for real support.

            Go away. This is SlashDot. We run overclocked AMD boxen in our bedroom, and pretend like it makes us system administrators. We have no idea what "enterprise vendors" might be. We just know that Cisco is too fucking expensive, and that IBM is a god damned joke.
              • This is SlashDot [...] We have no idea what "enterprise vendors" might be. We just know that Cisco is too fucking expensive [...]

              Perhaps you should read this [slashdot.org] once more, and think again who's the fucking moron.

          • ... there will definitely be times when the best resource is someone at SGI, HP or some random university.

            In the same city as Weta Productions (the studio where LotR is made) we find Victoria University, Whitireia Polytechnic, The Open Polytechnic of New Zealand, and the Central Institute of Technology (I might have left out a few). All of these tertiary education institutions have good IT departments and techies who would pay to be involved with Peter Jackson's project.
    • by Zeinfeld ( 263942 ) on Sunday July 07, 2002 @08:34PM (#3838695) Homepage
      Call up any vendor. Tell them their systems are unstable out of the box. Think they're gonna say something like, "oh, yeah, just tweak this little setting...". I don't think the quote above is very logical; no vendor is going to be that helpful with stability issues. Maybe "stability issues" was just a poor choice of words?

      Why is it that every time someone with real world experience of running Linux on a large scale talks of a problem the response is always that they must be either mistaken or stupid?

      Fifteen years ago you could have made the same comment about running large scale UNIX clusters. Sure, you could buy 64 RISC workstations and configure them in a farm, but you would end up rebooting a machine at least once an hour - I know because that's what I was doing fifteen years ago, only with rather more processors.

      Experience of running a single machine or a small cluster of office or university machines is not applicable to running large scale systems. If you have a system that is using multiple processors in a single computational task, you have to have both software that is designed for fault tolerance and a very high level of basic reliability. If you have a render wall of 256 processors and each one in standalone mode runs for a week without a crash, you will end up dealing with a system crash every 40 minutes, most likely more frequently due to interactions between the machines.

      This type of processing is the reason people used to pay a hefty premium for systems from folk like DEC who had lots of experience filling a room with machines and getting them to work reliably. Today that ability is the only thing keeping Sun afloat.
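      The "crash every 40 minutes" arithmetic above can be sketched as a quick calculation (my own back-of-the-envelope illustration, assuming independent failures; not from the article):

```python
# Back-of-the-envelope MTBF for a cluster: with independent failures,
# the whole-cluster mean time between failures is roughly the per-node
# MTBF divided by the number of nodes.

def cluster_mtbf_minutes(node_mtbf_minutes: float, nodes: int) -> float:
    return node_mtbf_minutes / nodes

week_in_minutes = 7 * 24 * 60  # each node crashes about once a week
print(cluster_mtbf_minutes(week_in_minutes, 256))  # 39.375 - a crash roughly every 40 minutes
```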

      • This type of processing is the reason people used to pay a hefty premium for systems from folk like DEC who had lots of experience filling a room with machines and getting them to work reliably.

        Perhaps you should tell that to Google, who seem to have realised you can make Linux work stably enough to run a cluster of 10,000 machines. I'm not saying there's no place in the world for commercial Unix, but the single vendor argument was always weak and remains so. If I pay Red Hat (for example) the same amount of money I pay DEC (Compaq/HP, whatever), there's no reason to expect I won't get the same level of support.

        Separately, there's the consideration of whether I'm better off paying DEC/Sun/X this enormous chunk of change for their premium "we don't randomly close your tickets" support level vs. just supporting my large cluster in-house. Clusters are, oddly enough, the place where this comparison leans closest toward the in-house argument - hundreds/thousands of sets of identical hardware means you only have to solve the hardware/software compatibility issues once, only have to keep one type of replacement hardware around, etc.

        If you have a system that is using multiple processors in a single computational task you have to have both software that is designed for fault tolerance and a very high level of basic reliability.

        Actually, part of the point of clustering is that you don't need enormous levels of fault tolerance. You only need the systems to be as fault-tolerant as the rate at which you can replace them (though, sure, it's nice to have them quite a lot more fault-tolerant than that).

        If you have a render wall of 256 processors and each one in standalone mode runs for a week without a crash

        ...then you have some incredibly unstable software. This isn't a "designed for fault-tolerance" situation; it's not even normal - it's less stable than your average Windows-based desktop system. It's fallacious to use this example to support your argument.
        • Perhaps you should tell that to Google, who seem to have realised you can make Linux work stably enough to run a cluster of 10,000 machines.

          Google's achievement is not a trivial one with any O/S platform.

          The point you seem to be deliberately missing is that running large clusters of processors is a non-trivial task, one that traditionally people have paid premium prices for.

          Notice that nowhere in the article did I say that 'Linux can't do this'. In fact my own company has switched to using Linux for certain mission critical clusters. However the engineering required to do that is distinctly non-trivial and certainly not an out of the box configuration.

          What I was arguing against was the slashweenie attitude of 'of course this is possible, in fact it is trivial'.

          Actually, part of the point of clustering is that you don't need enormous levels of fault tolerance

          You learned that in your 'theory' class eh? Well the practical class teaches you that you need both fault tolerant software and a pretty high level of basic stability. The problem being that 'redundant' designs with zero common points of failure are much harder to build in the real world than on paper.

          • You learned that in your 'theory' class eh? Well the practical class teaches you that you need both fault tolerant software and a pretty high level of basic stability.

            Please, try not to be so patronising. No, I didn't learn it in theory class, it's common sense. If by "pretty high level of basic stability" you mean "machines don't need rebooting once a week", you are of course right. If you mean "machines must need rebooting less than once a year", well, that'd obviously be lovely, but as I'm sure you'd be the first to admit, it's not really needed. Since the latter's what I (and, I believe, most people) define as a 'high' level of stability, you don't need this high level of stability in your cluster.

            Obviously, building (more importantly, maintaining) clusters (running Linux or anything else) isn't trivial, but it's wrong to make it out to be one of the Black Arts.
            • Hey, if I post something that starts off with the phrase 'I have built systems this big' and you then go sniping at the details in a patronising manner, then prepare to be patronised.

              If you mean "machines must need rebooting less than once a year", well, that'd obviously be lovely, but as I'm sure you'd be the first to admit, it's not really needed.

              If you have 256 machines and the average uptime is only a year then you are going to be rebooting a machine almost every day.

              You need the average uptime to be rather higher than that if you want the system as a whole to function reliably.

              The main problem is that most O/S are not written well from the point of view of recovery when a peer or a server goes down. You are very likely to find that a hardware failure at one node causes a ripple effect as other nodes that were communicating with it either time out in an inconsistent state or work from a divergent dataset.

              This is why under the old VAXCluster system the system had built in the somewhat counterintuitive notion that when a node lost sync with the cluster it should simply halt, rather than attempting to continue and propagate an inconsistent state.

              And no, high levels of stability are five nines, which works out to about 5.26 minutes of downtime per year, not a reboot per year.
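              For reference, the "five nines" figure can be checked with a one-liner (my own sketch, not the poster's):

```python
# Downtime per year implied by an availability target, e.g. "five nines".
def downtime_minutes_per_year(availability: float) -> float:
    minutes_per_year = 365 * 24 * 60
    return (1 - availability) * minutes_per_year

print(downtime_minutes_per_year(0.99999))  # about 5.26 minutes per year
```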

    • Why do linux ppls h8 admitting that there exists instability? The devil is always Windows, a result of the foul taste left in our mouths after using the Win95 variety of Windows. Win2K has run solidly on all systems that I have installed it onto so far. I've had to do restarts, but it's not as normal as it was in Win95 or 3.11... I've also had to restart my linux box, and a variety of Unix boxes. Thing is, software has a long way to go before we can guarantee stability as well as usability and extensibility - e.g. the ability to do what we want doesn't always fall inside the scope of the manufacturer's imagination.
  • "The problem with Linux is that it's an open source system, so if you are having issues or difficulties with its stability, it's like pushing on a rope; there's no single vendor to deal with. You have to be self-deterministic in terms of how things work. You have to make your own choices and do your own tests on motherboards, graphics cards, applications, operating system releases, all those kinds of things."

    What the hell? With the amount of money they're spending on this system, they can't call Red Hat, IBM, or HP? IBM and HP are already shipping them the workstations.

    Give me a break, pick one and run with it - testing motherboards? That's why you have vendors ...
  • by philovivero ( 321158 ) on Sunday July 07, 2002 @07:20PM (#3838467) Homepage Journal
    This story looks like a good excuse for me to share a little elation I have about Databases that are Free Software.

    I've been a Database Administrator and Linux zealot for about 7 years now, and it always got under my skin that there were no good production-quality databases for Linux.

    Then, a couple years back, Oracle, Sybase, IBM, and a few other giants made their RDBMSs available for Linux. So I upped the ante, and started complaining that there were no good Free Software databases that were production-quality for Linux.

    Then, about nine months ago in New Zealand I started talking to a consultant who told me he'd successfully migrated a few clients off of Oracle onto Postgres. At the time, I was incredulous, because I'd previously reviewed Postgres and found it unsuitable for production systems.

    Turns out, my information was outdated (things change FAST in the OSS arena).

    Since then, I've been slowly, carefully, calmly trying to see if Postgres (and incidentally, MySQL) were ready for production databases.

    Turns out, the answer is pretty much YES for Postgres and, sorry folks, still NO for MySQL.

    Postgres is an amazing product. The version I'm running, which is fairly recent at 7.2.1, can create databases based on Oracle-complexity DDL, and has good recoverability, stored procedures and triggers, and pretty much everything you'd expect in a full-fledged RDBMS.

    They even have a few of those extra bits that aren't necessary but that some DBAs and DB developers like, such as a built-in language (PG/SQL I believe they call it) and ability to write stored procedures in esoteric and strange languages.

    I've found their query tool (psql) to be the second-most powerful and useful query tool I've ever used (SQSH being the first).

    Amazing product, this Postgres 7.2.1. And from reading the database administrators' mailing list, it's pretty obvious that there are some fairly large-size shops migrating from Oracle to Postgres or even just using Postgres as their main RDBMS.
    • Last I checked, PostgreSQL didn't support raw partitions which can be a critical sticking point. Other than that, it compares very favorably to Sybase ASE, which is what I am familiar with.

    • I think that Postgresql would be rather more popular if there was a Windows version available with an obvious download of a setup.exe or whatever. There are a great number of developers who run Windows on the desktop but who have various Unix or Linux options for deployment, and DBMSs that can run on either are much easier to try out, to develop with, etc. Examples: Oracle, MySQL, Interbase, many others.
      • I think that Postgresql would be rather more popular if there was a Windows version available with an obvious download of a setup.exe or whatever. There are a great number of developers who run Windows on the desktop but who have various Unix or Linux options for deployment, and DBMSs that can run on either are much easier to try out, to develop with, etc. Examples: Oracle, MySQL, Interbase, many others.

        It's not that hard though, once you find the information on how to do it. I'm currently running 7.2.0 (as a service) courtesy of the following link, which I found very useful: http://www.ejip.net/faq/postgresql-7.1.3.README [ejip.net]

      • There is a postgresql port, that ships with cygwin.
        It is as simple as clicking setup.exe, downloading the postgresql binary and starting it.
        The big problem is that postgreSQL doesn't run properly on NT as a service by default, you need something like firedaemon to start it.
        The PgAccess GUI is available on windows as well, but it lacks a few features that psql supports.
        Pg doesn't run on Win 9x at all, AFAIK.
      • I see there are responses about a cygwin PostgreSQL, and one that arrived on CD for Windows from somewhere. I already knew that if I would work at it I could dig up a port to Windows somewhere. To be more specific about what would make it more popular:

        * It would need to not require cygwin (though I do like and use cygwin myself) and not feel like a port; it would need to be standalone and feel like a native windows product. For example, with a little admin app, and running out of the box as an NT service.

        * It would need to be promoted and downloadable right there with the unix/linux PostgreSQL.

        Of course the PostgreSQL people are free to support platforms however they like. These are just suggestions for things that could lead to wider use.
    • I agree that postgres is a good solid DB, but it still doesn't have good high availability features. For example, there's no replication or hot backup stuff, except for some alpha quality things.

      Once high availability is added, though, it will be a serious contender in enterprise designs.
      • Live backups [postgresql.org] under PostgreSQL are easy. For a small database, just run pg_dumpall > backupfile and you have taken a consistent snapshot of all databases on the system without bringing any database down, or blocking any reads or writes.

        If you have a more complex system, you probably want to use pg_dump itself on each database, rather than the wrapper script, so you can choose the dump format that best suits your needs.
          • My understanding of a hot backup is a mirrored db that can take over if-and-when the other stops responding. I'm not saying you can't back your data up (that would be ridiculous), but pg_dumpall isn't enough for a hot backup. For instance, it takes quite a while to dump the DBs because every record is pushed out. In a hot backup system, you dump once and then synchronize the differences. This keeps you up to date in (near) real time.

          As I said, there is work ongoing to add this functionality as a separate module (IMO, a good idea, because not everyone will need this), but it's only alpha quality right now.
            • That's why I didn't call it a hot backup, but a live backup - I've heard different things meant by hot backup (ranging from how you described it, to backing up with only a brief shutdown). I think marketing people have seized on that phrase, and use it to describe some random feature of their products so that they can always claim to have it.

            I've only heard the term live backup describe the following: you can dump a consistent set of data without needing to take the database down, or interfere with other processes.

            pg_dump accomplishes this. You get a consistent dump because each database is dumped in one transaction, and does not see the effects of other transactions going on in the system.
            • My understanding of a hot backup is a mirrored db that can take over if-and-when the other stops responding. I'm not saying you can't back your data up (that would be ridiculous), but pg_dumpall isn't enough for a hot backup.

            Well, Fjord, I suggest reading chapter 1 of a few database manuals. I'm familiar with Oracle, Sybase, Microsoft SQL Server, and Postgres. I've never heard any of these refer to a hot standby as a hot backup.

            Hot backup means you can backup the database while it's running.

            Postgres most certainly supports hot backup.

            As for hot standby, you can easily set such a thing up. DBAs have been doing it for years using transaction log copying. It's not any more difficult than setting up replication for Oracle (if you've ever done such a thing, you'll know it's a nightmare process).

    • Hmmm... what about Firebird? (http://sourceforge.net/projects/firebird/)

      We use it a lot, and are very happy - both with the functionality, and with the speed. Stable, too. And free. And open. Runs in Windoze and Linux.

      What more would you want?

      Ciao,
      Klaus
    • Replication is still unavailable, from what I understand. Having a mission-critical application requires highly available RDBMS servers. I really dig Sybase HA, with the main rep server (or clusters) brokering x-acts to multiple boxen; if one dies, the rep server shuffles off your transactions to one of your hot standbys with no downtime visible to the user.

      What good are highly available app servers if your RDBMS isn't?
  • Just a thought (Score:5, Insightful)

    by cr@ckwhore ( 165454 ) on Sunday July 07, 2002 @07:24PM (#3838475) Homepage
    Overall, the article was a good read. But I must point to the following observation:

    "... The problem with Linux is that it's an open source system, so if you are having issues or difficulties with its stability, it's like pushing on a rope; there's no single vendor to deal with. ..."

    The very next paragraph...

    "Weta had just taken delivery of 25 Linux workstations from IBM and Labrie reported that IBM and Hewlett Packard were the frontrunners for additional Linux workstation upgrades."

    Alright, so... what am I missing here? You've got IBM behind your efforts. What's the problem?

    Perhaps the comment was referring to specific pieces of software, although my experience has been that dealing with a group of open developers is far more useful than dealing with a single inept vendor. When the vendor is full of crap, where else can you turn?

    The first paragraph I mentioned continues...

    "You have to be self-deterministic in terms of how things work. You have to make your own choices and do your own tests on motherboards, graphics cards, applications, operating system releases, all those kinds of things."

    Again, I'm not buying this comment either... after all, you have IBM behind you! Don't they test the motherboards, graphics cards, operating system releases, and all those kinds of things?

    Obviously Linux has been a good solution for them, because they're using it. They're having success with it, and it's saving them loads of $$ versus using an alternative proprietary system.

    Can't wait to see this installment of LOTR!
    • Re:Just a thought (Score:2, Insightful)

      by MO! ( 13886 )
      Also interesting that at the bottom of the article it states the dude no longer works for Weta. I wonder if it was stress over such a hugely complex system, or a bit of ineptness with some of those complexities as noted by your comments.

    • "You have to be self-deterministic in terms of how things work. You have to make your own choices and do your own tests on motherboards, graphics cards, applications, operating system releases, all those kinds of things." Again, I'm not buying this comment either... afterall, you have IBM behind you! Don't they test the motherboards, graphics cards, operating system releases, and all those kind of things?

      It's about taking control of your own destiny. You do your own testing, not because IBM hasn't, but because you are the one who needs to know it all hangs together and works.

    • Alright, so... what am I missing here? You've got IBM behind your efforts. What's the problem?

      I've done work with IBM before, and it's not quite that simple. They are VERY strict in their scoping, and while they may have been willing to take on the responsibility of the individual workstations running Linux in this instance, that does not mean that they will do so for the whole package.

      Now, that may not have been the case here, but on 2 separate engagements I've been involved with, IBM said flat out "that's your problem" when dealing with integration issues.

      Again, I wasn't there, so I don't know.

      • I have to agree. IBM is not going to help you with jack if you are using the custom packager this guy is talking about. And even if you have a problem with their (IBM's) hardware, it could be a LONG turnaround time before they do anything for you. It all depends on the type of service contract you have with them. I once had a server tape drive take a month and a half for IBM to replace, and a HD that took about a month. I no longer use IBM for HW. That saying "No one ever got fired for buying IBM" should be retired. A lot of IBM has gone to the crapper.
      • The New Zealand police department had huge problems [gplegislation.co.nz] with a system [gplegislation.co.nz] purchased from IBM.

    • Re:Just a thought (Score:3, Interesting)

      by zenyu ( 248067 )
      Alright, so... what am I missing here? You've got IBM behind your efforts. What's the problem?

      He's probably comparing IBM service with SGI service. IBM will support your PC as well or better than Dell or Compaq, but SGI will send a guy in a cab with extra workstations if you have a problem. They charge for that type of service when you buy one of their PC's, but when they lend you an Origin on short notice you appreciate it.

      SGI will gladly sell you Maya for your Linux box, but it's up to you to set up the scanner, find the right 1000Mbps network card, compile a custom kernel, pick the filesystem, etc.
    • The guy was just talking out his ass. I mean, how many times has your boss said things that were completely wrong that he obviously hadn't put much thought into, but it sounds good and he has to act like he has an opinion and knows what he's talking about. That's all there is to it.

      Now the big question is, what then do you say to your boss if you can't tell him he's wrong, make him look/feel stupid and then have him hate you?

  • by gripdamage ( 529664 ) on Sunday July 07, 2002 @07:34PM (#3838497)
    Interesting bits include managing an off-site database of 45 TB, Linux workstations from IBM, 1400 processors, and the animation methods to be used on Gollum. It's a good thing. :)

    A precious thing, one might say...
    • Those storage arrays are really fun to play with. If you want to see what they are using for storage arrays, check out LSI Storage Systems [lsilogicstorage.com]. StorageTek might sell them, but the good people over at LSI Storage Systems make the things. They are all Fibre Channel hard drives, normally around 50GB each. Depending on what is needed by the customer, we have 10k and 15k RPM drives that can go in. Everything is hotswappable, and I mean everything. They have some really good transfer rates (I want to say around 850MB/sec when benchmarked). And they are pretty, with all the flashing lights on them.

      But don't think about getting any for home, the controllers alone cost as much as a family sedan.
  • For modelling and rendering? Is it the standard Maya + Renderman combo? Or something proprietary?
  • There are more than enough incongruities in the "article", but that's not what bugged me the most. That whole site was just a big ad. Ads upon ads. And then you have that "News" block at the bottom. I started to think it was a porno ad or "vitamin" ad. It had that "Buy Now" type of lettering.

    Point: Dont know if I quite trust these people for "news sources". Looks more like ad hell.
  • by malakai ( 136531 ) on Sunday July 07, 2002 @07:50PM (#3838545) Journal
    I'm amazed that in this day and age they are having problems with asset management/tracking. Although it's underplayed in the interview, it seems as though the Informix Media 360 was a complete bust.

    I can't imagine it was beyond their programmers' prowess to create plug-ins or custom scripts that could save the media to a server under some GUID of a filename, and insert a row into a table someplace with the metadata for that asset. A homegrown content management system is really simple with today's scripting/filesystems/XML. Hell, you could throw out the database insert and just write a filename.xml in the same directory, then harvest the information later.

    I'm amazed they stumbled on this, and even more amazed they paid for the Informix product (didn't IBM buy them and drop that product anyhow?).

    Also, is it just me or does it seem like this CTO was 'released' at an odd time?

    -malakai
    • by foobar104 ( 206452 ) on Sunday July 07, 2002 @10:22PM (#3839226) Journal
      A homegrown content management system is really simple with todays scripting/filesystems/XML.

      No offense, Malakai, but it's pretty clear that you've never worked on any kind of asset management system. It's a much harder problem than you give credit for. I write asset management systems for a living, so I've had a bit of experience here. A friend of mine, who now works with me, worked at ILM last year and this past spring; he was a compositor. I've talked to him for hours about ILM's asset management system. It's entirely home-grown. If anybody can do it right, you'd think ILM could. But my friend says that it's immensely frustrating in a lot of ways.

      The things that were brought up in the article about Media 360 are not new; these are the same problems that all asset management systems have to deal with. The biggest one being, of course, that from the artists' perspective, it's easier not to use the system than it is to use it.

      I'm amazed they stumbled on this, and even more amazed they payed for the Informix product (didn't IBM buy them, and drop that product anyhow?).

      Informix spun the Media 360 product off into its own company, called Ascential. I've heard some ugly rumors about the health of that venture, but I probably shouldn't say anything specific.
  • by AntiTuX ( 202333 ) on Sunday July 07, 2002 @09:24PM (#3838952) Homepage
    sorry, I just couldn't resist.
  • mmmmm.....

    by prmths ( 325452 )
    i was really happy with the first movie...
    even though all of the Bombadil saga was pulled out.. etc...

    but i guess they can't leave it all in, since it'd take about a solid day to play the full thing...

    i can't wait to see The Two Towers... i haven't seen anything with Treebeard... anyone heard anything?

    i'm also kind of curious how much crunch power Fellowship took compared to The Two Towers...
  • Have we broken the 2GB [or whatever] file size limitation yet? I wonder how one can realistically store huge files on a modern Linux filesystem. I'm not up to date with the latest advances in this area; does anyone have more info?
    • Using ReiserFS 3.6 you can have files sized up to 1 exabyte, which is somewhere over a quintillion bytes (1,152,921,504,606,846,976 bytes, I do believe). However, on 32-bit systems the size of a single file is limited to 17.6TB or so. Older versions of ReiserFS only support file sizes of about 4GB. Practically, however, the entire filesystem is limited to 17.6TB, so it is doubtful you're going to be able to have a single exabyte-sized file. A single 17.6TB file isn't too shabby, however.
      • "1 exabyte ought to be enough for everybody"
        • -Graymalkin, 2002

        Joking aside, I routinely need files over 4 GB in size when doing video editing of DV files; a 2 or 4 GB maximum file size is a disaster for me :(

        Thx for the updated info on ReiserFS; it's good to know where we're headed :)
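        BTW, you can check whether your own kernel/libc/filesystem combo has broken the old 2/4 GB barrier without actually burning 4 GB of disk: seek past the limit and write one byte, which creates a sparse file. A rough Python sketch (function name is mine):

```python
import os
import tempfile

def supports_large_files(directory, limit=4 * 1024**3):
    """Try to create a file one byte past `limit`. The file is sparse,
    so it takes almost no disk, but its reported size is limit + 1."""
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        os.lseek(fd, limit, os.SEEK_SET)  # seek past the old limit
        os.write(fd, b"\0")               # size is now limit + 1 bytes
        return os.fstat(fd).st_size == limit + 1
    except OSError:
        return False                      # filesystem/OS refused
    finally:
        os.close(fd)
        os.unlink(path)
```

        On any recent 2.4 kernel with LFS support this should come back True for ext2/ext3/ReiserFS.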

  • You would think the studios would have already learned their lesson from the Jar Jar and Scooby-Doo atrocities. :sighs:
  • The problem with Linux is that it's an open source system, so if you are having issues or difficulties with its stability, it's like pushing on a rope; there's no single vendor to deal with. As if they wouldn't have stability problems with any other system; dealing with a "single vendor" does not equal stability.
  • When they more or less say that there is nobody to help them with their problems, I think they haven't seen the full potential of Free Software / OSS.

    Clearly they are not the only digital animation shop on the planet, so others that switch to Linux will face the same problems. And I know of a few in London that are switching to Linux.

    So if they and all the others would give back what they fixed and developed, the investment would suddenly shrink and everybody would gain.

    But then most studios are afraid to disclose what they are doing and how, for the simple reason that technology is one of the key parts of creating good digital animation. So if everybody has the recipe, they'd lose their advantage in the competition against all the other hundreds of shops.

Beware of Programmers who carry screwdrivers. -- Leonard Brandwein