|
2001/2/6 [Computer/SW/OS/Linux, Computer/SW/OS/OsX] UID:20508 Activity:nil |
2/5     Someone below posted that Linus hates firewire. Is this person
        serious? Does anyone here know anything bad about firewire?
        (Everything I have heard has been very positive.) What? (Added
        for those "yes"/"no" S.A.s.)
        \_ Well, there was that $1 fee Apple wanted and then quietly
           dropped. Some people complain that the developer docs and the
           spec are hard to obtain (okay, it's an IEEE standard, but I've
           still heard complaints online). And then there's the USB 2.0
           crowd that feels that Firewire is a pansy Mac thing. Here are
           some articles:
           http://lwn.net/1999/features/LinuxWorld/lt-keynote.phtml
           http://slashdot.org/articles/99/09/05/1325210.shtml
           BTW, I'm all for firewire. It kicks major arse. If poor little
           Linus and his UNIX clone LinSUX don't want to get in on this
           technology, he can stick to his crappy peecee. I don't run his
           OS nor will I in the near future.
        \_ Linus made disparaging comments about Firewire in the past, but
           the Linux 2.4 kernel supposedly supports Firewire:
           http://www.firewireworld.com/news/2000/08/20000823/linux.shtml
|
lwn.net/1999/features/LinuxWorld/lt-keynote.phtml -> lwn.net/1999/features/LinuxWorld/lt-keynote.php3

LinuxWorld coverage - Linus Torvalds keynote, August 10, San Jose

Linus was introduced by Larry Augustin, head of VA Linux Systems. Larry started by asking how many people in the audience had contributed something that appeared on a Linux distribution CD somewhere. How many had contributed something that appeared on a Windows CD? That, said Augustin, is the strength of Linux - the contributions from all of those people. It is that contribution that makes Linux different.

When Linus took the stage, it looked rather much like groupies rushing the stage at a rock concert. People cheered and threatened to head into a standing ovation; Linus, looking a little embarrassed, asked the audience to stay calm. Linus didn't, he said, want to do a normal, boring talk. But the LinuxWorld organizers were absolutely adamant that he show up. There was a brief "warm, fuzzy gratitude section" to Linux users as a whole for being such a great community.

With this release, the handoff to Alan Cox and company is complete, which is a major milestone. About all Linus did was to "sprinkle holy penguin pee" on it. Linus thinks this is great; now he can ignore the stable kernel and go do the fun work. There are new drivers for disk arrays and Mylex and Compaq controllers, and a number of updates to older drivers. Of importance to a number of users will be that the ISDN code has finally been resynchronized. The infrastructure for symmetric multiprocessing, for example, has now really been put into service. There is I2O support, and a lot of improvements to resource allocation, needed for PNP, PCMCIA, and hotplug PCI stuff. Finally, says Linus, they really had wanted to include a "mauve screen of death" feature, but they ran into problems with patents held by Microsoft, so they had to drop that one too.

Then it was time to move on to the question and answer session.

A: Much work is being done in support of handheld devices, much of it by the companies that are getting into Linux in the embedded arena. Linus also wants to see the PCMCIA code more tightly integrated into the kernel. As to Firewire, it is not very well documented or used, and could present unpleasant licensing issues.

Q: Code quality is clearly very important to you, but there is a lot to keep track of. Where do you draw the line in what you monitor personally?
A: He starts by having an awful lot of testers (the users). Linus really cares about the core system: process management, interrupts, memory management. With things like filesystems, he often can't even test them, and they are not his area of expertise or interest. For filesystems and drivers, he really has to trust the expertise of the maintainers. Code he deals with directly has a fairly straightforward style, but areas maintained by others should use the style they prefer - as long as he doesn't have to look at it too deeply.

A: USB has been put forward as one great thing, but that really is not true. USB gives you an electrical standard, but says very little about the protocol used, and that is where the troubles can come in. Usually adding support for individual devices will be relatively easy, now that the infrastructure is in place. Regarding APM, what he really wants to do is support ACPI instead, which is a more direct approach. APM is meant to work on larger time scales, where things happen on the order of seconds. With ACPI, you can power things down "between clock ticks," giving much finer control.
He currently has code which works on exactly one laptop in the world, which is a start.

He doesn't like the naming scheme used by devfs; it looks like a flashback loved by old-style Unix hackers that leaves all others clueless.

Q: When will the kernel be scalable beyond 4 processors?
A: Linux is currently scalable from four to eight processors; Linus expressed confidence that the "friends in Redmond" will try hard to find problems, and will thus serve as the beta test site. But they will have to work much harder at it this time around.

A: Capabilities have been present in the kernel for a while, but are still kind of simplistic. The problem is that it is hard to find a good interface for controlling capabilities (LWN has covered the capabilities discussion extensively in the kernel section). Security-conscious sites can rule out many operations once the boot process is complete. (A minimal code sketch of this idea appears after this article.) The rest of the capability interface will have to be thought out more, a process which will be aided now that the infrastructure is in place.

The kernel has no NUMA support now, and adding it will be a big project.

A: Yes, but it will still be able to run 32-bit binaries, just like the UltraSparc port can now.

A: (Winces) There are many problems with DVD support, and technology is not one of them. There are a lot of trade secrets around how DVD information is decrypted. There is interest in working this out, but there seem to be only two options: either you need a hardware unit which does DVD decoding, or it will be necessary to buy a commercial DVD player.

People who really need real time can keep patches up to date much more easily, but Linus does not want to encourage the use of hard real time except in the rare cases where it is really necessary.

Q: There have been concerns about the staleness of the ISDN code.
A: Linus is now happy about the ISDN code; the issues there have been resolved. The ISDN maintainers are now aware that they need to help keep things in sync. Updating ISDN at this point is not a feature freeze issue, just an update of already existing features.

Asynchronous I/O can be done much more cleanly with threads. (A thread-based sketch appears after this article.)

Q: How far away are software RAID and LVM from being stable?

Q: What are the plans for support of hot-swap technologies, such as CompactPCI?
A: Hot-swap support came out cleanly due to the thinking that went in regarding ISA PNP and PCMCIA. They all have the same needs, which have to do with careful resource allocation.

He thinks they make things too easy, with the result that people fix the symptoms and miss the underlying reasons for problems.

You can still get into "bizarre cases," but most of the problems are fixed. Many of them came down to an off-by-one error that made the kernel not realize that it was running out of memory.

Q: More nontechnical people are getting involved with Linux. Are they members of the Linux community, and how can they help?
A: Linus likes kernel development, but he does not see non-kernel developers as lowlifes. He has been saying for years that most of the exciting work is outside of the kernel, and is perhaps not even programming. He put out Linux so that people could find uses for the system and problems that he does not have, so that it could be improved. One issue has been the ISA PNP stuff; sound cards are one of the areas where it is truly a big deal.

Q: How do you feel about internet services, such as HTTP or SMB, being implemented in kernel space?
A: The kernel HTTP server is very clean; he likes the code.
It may be something you enable only to win performance benchmarks, or as a teaching aid. Regarding SMB, he does not think that the Samba people would want to do it in the kernel. The problem is just far too complex to be solved in kernel space.

Q: Do you fear a clash between commercial interests and open source?
A: Companies have their own agendas, and are happy to support development people to get what they need. Developers have been entirely happy to take piles of money. Schedules and product deadlines ("the mother's day release") have not been a problem.

Once the question period ended, Linus commented that he had been asked to read an introduction for the IDG guy (whose name I forgot) who would present the "community award" to the Free Software Foundation. It was a long-winded, excessive introduction, and Linus clearly hated having to plow through it. He read it directly off the paper, and managed to make it into a subject of some ridicule. It was clearly written by the same person who wrote the intro. Linus did well at keeping a straight face through the whole thing, but it was a painful thing to sit through. Richard Stallman came up to the stage to claim the award, in front of the smiling lady holding the six-foo...
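The capabilities answer above is abstract, so here is a minimal sketch of what "ruling out operations once the boot process is complete" can look like from userspace, using the libcap interface. The particular capabilities dropped (CAP_SYS_MODULE, CAP_NET_RAW), the file name, and the build command are illustrative assumptions, not anything from the keynote.

/* Minimal sketch: a privileged process permanently drops capabilities it
 * no longer needs after initialization. Illustrative only; the choice of
 * CAP_SYS_MODULE and CAP_NET_RAW is an example, not from the article.
 * Build with: cc drop_caps.c -lcap
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/capability.h>

int main(void)
{
    cap_t caps = cap_get_proc();          /* current process capabilities */
    if (caps == NULL) {
        perror("cap_get_proc");
        return EXIT_FAILURE;
    }

    cap_value_t drop[] = { CAP_SYS_MODULE, CAP_NET_RAW };
    const int ndrop = sizeof(drop) / sizeof(drop[0]);

    /* Clear the capabilities from both the effective and permitted sets,
     * so this process cannot raise them again later. */
    if (cap_set_flag(caps, CAP_EFFECTIVE, ndrop, drop, CAP_CLEAR) == -1 ||
        cap_set_flag(caps, CAP_PERMITTED, ndrop, drop, CAP_CLEAR) == -1 ||
        cap_set_proc(caps) == -1) {
        perror("dropping capabilities");
        cap_free(caps);
        return EXIT_FAILURE;
    }

    cap_free(caps);
    printf("module loading and raw sockets are now off limits\n");
    return EXIT_SUCCESS;
}

Once cleared from the permitted set, the capabilities cannot be re-raised by this process, which matches the "lock things down after init" idea in the answer.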
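Linus's one-line remark that asynchronous I/O "can be done much more cleanly with threads" is easy to miss, so here is a minimal sketch of the pattern: a worker thread performs the blocking read() while the caller stays free to do other work. This is plain POSIX threads; the file name, buffer size, and thin error handling are placeholder assumptions for illustration.

/* Sketch of "async I/O via a thread": the blocking open()/read() happen in
 * a worker thread while the main thread stays responsive. Placeholder file
 * name and sizes; minimal error handling. Build with: cc aio_sketch.c -lpthread
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

struct io_request {
    const char *path;       /* file to read (placeholder) */
    char        buf[4096];  /* destination buffer */
    ssize_t     nread;      /* bytes read, or -1 on error */
};

static void *read_worker(void *arg)
{
    struct io_request *req = arg;
    int fd = open(req->path, O_RDONLY);
    req->nread = (fd >= 0) ? read(fd, req->buf, sizeof(req->buf)) : -1;
    if (fd >= 0)
        close(fd);
    return NULL;            /* completion is observed via pthread_join */
}

int main(void)
{
    struct io_request req = { .path = "/etc/hostname" };
    pthread_t tid;

    pthread_create(&tid, NULL, read_worker, &req);

    /* ... the main thread is free to do other work here ... */

    pthread_join(tid, NULL);    /* wait for the "async" read to complete */
    printf("read %zd bytes\n", req.nread);
    return 0;
}

When the caller eventually needs the result it joins the worker; a real program would use a pool of such workers and a completion queue rather than joining one thread at a time.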
slashdot.org/articles/99/09/05/1325210.shtml

Is firewire dying? Intel is not (and will not be) supporting FireWire on their core logic chipsets.

The Fine Print: The following comments are owned by whoever posted them.

Somehow, I don't read this as a declaration that firewire is dead, or even sick.

Apple must die because they are greedy/closed/stupid: so therefore, Firewire must die. I also doubt many people here know how much money OEMs make on a particular hardware component, yet in this case they repeat what Intel said.

People not reading the actual article: it seems about half the time people here just post knee-jerk reactions to the headlines without even reading the original article. You bitch about MS doing it, but in this case it must be okay because Apple owns the patent on IEEE 1394. I guess if Intel released a press release saying Apple was dead and it was quoted in the PC press, that must mean it's true.

Apple seemed to fix the price of Firewire well out of the range of rational thought, which drove camcorders through the roof.

Why should it surprise you that they dropped the price months ago? It doesn't seem to surprise you that everyone else lowers their prices.

I was using a mouse, keyboard, and camera before its release. Most vendors said they had plans for USB products, but they hadn't been released yet. I hate to do this, but I have to give them credit for this.

It's a matter of supply and demand, and as demand grows the price will drop. More importantly, USB drives are just as expensive, and you have a max transfer rate of 12 megabits instead of 400 (soon to be 800 and eventually 1600); a quick back-of-the-envelope comparison appears after these comments. Now we have to add a physical interface chip to drive the signal (about $5 from Lucent in a 64-pin package) and enough smarts to interface from a microcontroller to the device on the other end. Now add all of those devices and try to put them in a small package like a drive. Combined with the normal drive controller hardware for a high-performance unit, you get a large package without some fancy board layout. So we have moderate to high recurring costs and high non-recurring costs, plus a cramped layout to fit the consumer demand for size. In a few years I'd expect to see a single-chip solution like USB, but bigger and around $2-$5 without the connector.

At least with USB you can have one "dumb" device like a keyboard or mouse that only takes commands to itself over USB, runs them through the mechanism, and gives back a state value or binary from the EEPROM. If you have mutual data transfer going on between two devices, they both have to be smart. For instance, I hook a USB-enabled hard disk up to my computer, and all it does is look at the driver and send seek messages, read/write messages, etc. to it. With firewire, the device has to be smart enough to know how to load and save files from itself.
A Firewire DVD player, for instance, has to send _encrypted_ video over the wire, and the connecting host has to have the logic to decrypt the data. This is a good example of devices getting very complicated, and of how not having a device in control limits forward compatibility (supporting devices you haven't seen yet).

Firewire is cool (although it looks to be little more than a network with device-powering built in). USB is cool too; it was designed with manufacture of cheap devices in mind. The implementation of USB is much cheaper right now, both in devices and on the motherboard, so Firewire's main advantage is speed and its disadvantage is price. While I see Apple doing much to market its advantage, they are also doing things like charging for usage, which makes the price differential even higher. Unfortunately it is hard to compare a USB device against a Firewire device since they are completely different markets, but you can add a USB-enabled microcontroller to your hardware to make it USB-compatible for under a dollar.

It was Apple's single, self-powered, self-configuring keyboard and mouse interface for the past 12 years. RS-422 and "the stupid d-sub VGA connector" have not been on a Mac since the introduction of the iMac and Blue and White G3's over a year ago. While the consensus amongst those who need to do more research is that "Firewire is dead," those of us in the Mac community are happily plugging away with the technology. All Macs with the exception of the iMac and iBook have Firewire standard (replacing the otherwise pretty fast SCSI). Slightly more expensive, but the graphics professionals and such who rely on it could care less.

I was talking only about the hardware implementation and design costs.

USB requires a computer: to use USB, there must be a computer (or at least an intelligent hub) in order to allow transmissions; FireWire makes no such demand: you can easily connect two cameras directly, with no hub whatsoever.

Yes, but you cannot connect just any firewire device to any other - they are not universally compatible. For instance, you can't plug a Firewire DVD into your camcorder to tape your favorite movie while in Blockbuster (hurry!). FireWire can say to a device, "You need a constant 200 Mbps data flow?" There is a guaranteed rate, but it is computer-controlled, not device-controlled. If the operating system cheats you, you can't blame the standard, right? But on the other hand, the standard is a bit weak on how a new device is supposed to convince the OS to negotiate more space than the leftovers it has. I haven't read the IEEE specs, so I don't know if they account for renegotiation.

Seems to be going strong there with more and more products being introduced and announced as we speak. Remember, USB was pretty much going nowhere also until Apple brought it out in the iMac.

I was at one time excited by USB, until I discovered how limiting it is in cable length. Firewire could have become an adjunct to 100Mb Ethernet. Perhaps the shenanigans of Apple with respect to royalties have brought us to this point. I suppose now we must hope for gigabit Ethernet to become cost-effective and pervasive. Perhaps then we can consign the wimpy USB to the dusty closets of history.

Only Windows 98 has passable USB support, and it's not even in the same ballpark as what Apple has done in Mac OS. It certainly doesn't give manufacturers the feeling that they can ship devices out the door and not worry too much about tons of support calls.
It was going to immediately kill the old-style serial and parallel ports and lead to a new age of plug-and-play computing. Last year, for publicity, Intel plugged 111 devices into a PC to set a world record, but many of them didn't work. The next day, Apple employees plugged 127 working devices into a Mac while drunk. I'm not mentioning this as some sort of Mac vs PC thing. Wintel + IBM, Compaq, Gateway, Dell, et al. is one great big company with competing divisions, and it doesn't move quickly.

At this time, the eleven companies have decided to license essential patents. For the convenience of the potential licensees, the portfolio will contain essential patents which cover those portions of the Specifications (IEEE 1394-1995, IEEE P1394a and IEC 61883 Part 1) required for their products to be compliant with the Specifications. In addition, these companies intend to include the IEEE P1394b specification, which has not yet been adopted, in the license. Other patent holders are encouraged to participate in the joint licensing program. Interested companies should submit a letter stating their interest and listing the patents that they own and believe to be essential.

That's what Intel would have you believe, since they don't own it like USB (which Intel DID NOT invent, ...
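The "12 megabits instead of 400" point in the comments above is easier to appreciate with a quick back-of-the-envelope calculation. The tiny program below compares ideal transfer times for a 650 MB payload (an arbitrary CD-sized example, not a figure from the thread) at USB 1.1 and FireWire signalling rates, ignoring all protocol overhead.

/* Back-of-the-envelope comparison of ideal transfer times for a 650 MB
 * file at USB 1.1 (12 Mbit/s) and FireWire (400 Mbit/s) signalling rates.
 * Protocol overhead is ignored; the file size is an arbitrary example.
 */
#include <stdio.h>

int main(void)
{
    const double megabytes = 650.0;              /* example payload (a CD image) */
    const double bits      = megabytes * 8e6;    /* using 1 MB = 10^6 bytes for simplicity */
    const double rates[]   = { 12e6, 400e6 };    /* bits per second */
    const char  *names[]   = { "USB 1.1 (12 Mbit/s)", "FireWire (400 Mbit/s)" };

    for (int i = 0; i < 2; i++)
        printf("%-22s -> %7.1f seconds (ideal)\n", names[i], bits / rates[i]);
    return 0;
}

Even with overhead ignored, that works out to roughly seven minutes versus about thirteen seconds, which is why the storage and video people in the thread care so much about the difference.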
www.firewireworld.com/news/2000/08/20000823/linux.shtml