ORIGINALLY PUBLISHED IN LIMA NEWSLETTER SEPTEMBER 1993

LETTER from AUSTRALIA - No. 5                                Jun / 93
---------------------------------------

Somehow here at Funnelweb Farm we seem to have been under siege from some of the local wildlife. No, not the funnelwebs! Earlier in the year we returned after a weekend at Hawks Nest to find the fireplace mesh screen knocked over. A possum, not one of our regulars as it turned out, had come down the chimney while we were away and was still in the house. In the china cupboard, in fact. This contained all the hand-made wine glasses, glass ones from a local craftsman, and pottery goblets that Val had made at the city recreation department's old firehouse when we were in Boulder, CO, for a year in 78/79, and which had survived the trip back to Australia. Eventually we managed to extract it with the loss of only a few items. When we finally coaxed and shoved the possum into an old potato sack it quit snarling, biting and scratching, and just went all limp. Played possum, as the saying goes. Released down the back, it went scampering off into the trees.

Just the other week daughter Eileen visited home while we were away to find a kookaburra in the house. It had crashed through a pane of glass and could not find its way out again. It had been there for a day while everyone was away, behaving like a pigeon with a statue. A few years ago another crazy kooka took a fancy to dive-bombing the windows at Hawks Nest. They have very powerful beaks and can make quite a thump. I do not know how this one got started - maybe it thought it saw a reflection as a rival and liked the feeling of hitting the window - or maybe it knocked itself into a psychotic state. You could be standing on the veranda with Krazy Kooka up in one of the trees, and it would come zooming in, fly a tight semi-loop around you, and then crash into the glass. The neighbours were complaining about the constant thump-thump, which went on for weeks. We ended up with flattened cardboard packing boxes and chicken-wire nailed up over the windows to discourage it. Come to think of it, I haven't heard any whipbirds for some while. Better not have been the bloody cats killing them off.

Only about a meter away from the TI I have a 486 PC. There is something about it that reminds me of the TI-99 experience. No, it is not the machine itself but the fact that it is running IBM's OS/2. It is an excellent operating system, in fact the first decent one on that powerful but disgustingly ugly Intel platform. So there it is, a fine piece of work put out by a very large company which has consistently stuffed up its product development and marketing, and is beset by a competitor with grossly inferior product which gets all the support and magazine hype, no matter how bad or undelivered its products are. The wheel seems to have turned full circle. Do I really want to get into all this again?

I now finally have Internet access from my office computer, another 486, and have been looking at the posts on comp.sys.ti, which is the only TI stuff I can find. A recent common theme has been PE-box and card dissipation. I will not rehash that here, but will reflect instead on how electronic equipment cooling should be done. The thing that strikes me is how badly air flow in typical PCs is handled. If you have ever seen an old Tektronix oscilloscope you will appreciate how it should be done. A large cooling fan on the back draws air in through a filter, and this clean air then blows through the equipment and finally out the ventilation holes.
It needs occasional maintenance by filter cleaning, of course, but deposits the minimum of dust and crud inside the box. The TI PE-box shows a lot of its industrial heritage in its air distribution system; the only thing lacking is filtering of the air as it is drawn in. The typical PC seems to be very haphazard in this regard, even some from big companies that should know better. A couple of weekends ago William and I visited Ben Takach in Sydney. Ben is a long-time TI-99 and CC-40 stalwart, but he was fixing a genuine IBM PS/2 at the time for a relative. The ventilation in this was so designed (if the word can be used here) that if a disk were in the floppy drive, the cooling air was drawn in through the opening of the disk drive and finally exhausted out the back. So this "design" drew dust-laden air directly from outside through the most mechanically delicate item in the box. You guessed it, Ben had just had no alternative but to replace the floppy drive at great expense.

Progress never seems to be a steady forward trend, and often seems to involve steps backwards as well as forwards. The small computer graphics and film outfit in Sydney where Will had been working is contemplating a move to Silicon Graphics machines. Up to now they have been working on a network of PCs, hardly ideal for serious graphics work. So Will wrote some test programs for comparison purposes and sent them over to SGI's local office. The first of these was an image analysis program that involved lots of floating point calculations. As expected this ran very much faster than on a 486 DX/50, though I am not so sure just how it ranked as speed for money. PCs are ugly but give a lot of raw power for the buck, while SGI machines, though elegant, are very expensive indeed. The second, about 2 pages of ANSI C code, was an image cross-fade program using 32-bit integer pixel values. They did not hear back from SGI for over a week, and then it was a low-key admission that the program in fact ran faster on a DX/50 PC than it did on an R-3000 based SGI machine, and more embarrassing still, was even slower again on an SGI R-4000 machine. They had been trying for the week to optimize it, and would not admit to just how much slower it actually ran. The reason turned out eventually to be second-level cache thrashing in the SGI machines, and the programs ran much faster on smaller data sets. What is more, the actual comparison when they had the SGI machine in for evaluation showed the R-4000 only half as fast as the DX/50 on this integer task. That is not what you buy a bigger and more expensive machine for, and it shows that even the most gold-plated engineering does not always get it right.
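For the curious, the inner loop of a cross-fade is about as simple as image processing gets. Here is a minimal sketch in C of what such a loop might look like - my own reconstruction for illustration, not Will's actual program - assuming 8-bit pixels and a blend factor running from 0 to 256 so everything stays in 32-bit integer arithmetic:

    /* Integer cross-fade inner loop - a reconstruction for
       illustration only, not Will's code.  Blends pixel arrays
       a and b into out; alpha runs 0 (all b) to 256 (all a). */
    #include <stddef.h>

    void crossfade(const unsigned char *a, const unsigned char *b,
                   unsigned char *out, size_t npixels,
                   unsigned long alpha)
    {
        size_t i;
        for (i = 0; i < npixels; i++) {
            /* one 32-bit multiply-add per pixel, scaled back by 256 */
            out[i] = (unsigned char)
                     ((alpha * a[i] + (256UL - alpha) * b[i]) >> 8);
        }
    }

A loop like this streams straight through three large arrays and touches each pixel exactly once, so there is no re-use for a cache to exploit, and if the arrays happen to collide at the same lines of a second-level cache, every access can miss. That, presumably, is the sort of thrashing the SGI machines ran into on full-size images.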
There does seem to have been some kerfuffle in the public electronic media, as visible here on comp.sys.ti, over various CPU memory expansion schemes, either available or proposed. As usual, smouldering discussions seem to generate more heat than light. So why not a few outsider's comments in review for this letter, which will hopefully add a little light?

There have been minor-league plug-and-play CPU memory expansions around for the TI-99/4a for years now. The "supercart" was the most common of these, and extended versions which bank more memory are around but not common. Their rarity, their limited 32K typical total size, and most of all their incompatibility with Extended Basic have precluded any serious software development.

Off to one side we have the RAMdisks. The Horizon family banks RAM in the DSR area, but only in small 2K segments at one address. The banks are small and the CRU structure is messy. All in all I think they are best used as RAMdisks emulating physical disks as closely as possible and as fast as possible. Auto-booting has been the subject of earlier letters. My locally designed and made Quest RD is much cleaner in CRU assignment but is a rarity in general terms. The RAMBO modification for big HRDs is in principle an advance, with hardware changes for 8K blocks mapped to the cartridge ROM/RAM space and DSR software shielding the user from the uglier CRU details. I just think it is not a real substitute for a full-bore memory expansion design. We have never been able to do anything with it here because our HRD-3000 has only ever fully worked for a week and currently sits on the shelf as unusable. As a consequence I am quite turned off RAMBO as an idea to follow. The Myarc 512Kb RD was never designed as a general purpose device, as it banks all 32K of expansion RAM space at once. It is a difficult device for other possible memory expansions to live with because its basic 32Kb is out there in the PE box and cannot be turned off. Other early third-party devices are just too rare to be of interest - I have seen only one Foundation card, and no Morningstar ever seems to have crossed the Pacific. RAVE have been mentioned as another source, but have never been sighted over here.

One item on comp.sys.ti did seem to give the impression that TI's only RAM banking scheme was the one developed for the 99/8. Well, never having seen a 99/8 I don't know what that scheme was, but clearly it would NOT have been strictly relevant to the 99/4a, even if a 99/8 could interface to the P/E box and the cards in it. In fact TI did have RAM expansion plans for the 99/4a in a 128Kb card, and further plans for an even bigger card. We have one of these rarities working (using a TI pilot-batch PC board and PAL chip courtesy of Richard Fleetwood), and I gather several other people do as well. It was presumably designed with further extensions to Extended Basic and E/A in mind, and left low-mem alone while switching in 32Kb banks for high-mem and the DSR space. It still looks to me like a very workable model for memory expansion on the 99/4a, with the CRU assignments redone for arbitrary size and made readable as well as writable, and a standard mapping routine established. Of course, 10 years later it would be surprising if better expansion implementations were not feasible.

The arguments so hotly pursued on the merits of different memory banking schemes, with no real technical detail apparent to this outsider, seem to miss the real point: the actual details of banking of data areas in any reasonably designed system are a minor factor compared to the decision overhead on whether to flip banks or not when data sets are bigger than the bank size. This is already a problem in using the 9938 VDP, but is eased there by auto-incrementing of the address on read or write of byte data over bank boundaries, which saves having to do comparison checks on the VDP address for every byte transferred, or other prior calculations where possible. Then the mapping routine has to be called only for new addresses starting a run, and I have always found it sufficient to map from a virtual 64K buffer, which is in scale for a 16-bit CPU.
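To make that decision overhead concrete, here is a sketch in C of a run-oriented mapping routine. The 8K bank size, the select_bank() routine and the names are all assumptions of mine, standing in for whatever the real hardware interface turns out to be:

    /* Run-oriented bank mapping for a virtual 64K buffer.
       The 8K bank size and select_bank() are illustrative
       assumptions, not any particular card's interface. */

    #define BANK_SHIFT 13            /* 8K banks               */
    #define BANK_MASK  0x1FFFu       /* offset within a bank   */

    extern void select_bank(unsigned bank);  /* hardware stub  */

    static unsigned current_bank = 0xFFFFu;  /* force 1st flip */

    /* Map a 16-bit virtual address (0..0xFFFF) to an offset in
       the bank window, flipping banks only when a run actually
       crosses into a new bank. */
    unsigned map_addr(unsigned vaddr)
    {
        unsigned bank = vaddr >> BANK_SHIFT;
        if (bank != current_bank) {  /* the per-run decision   */
            select_bank(bank);
            current_bank = bank;
        }
        return vaddr & BANK_MASK;
    }

The caller invokes map_addr only at the start of each run of consecutive bytes, so the test costs a couple of instructions per run rather than per byte - the same saving the 9938 provides in hardware with its auto-incrementing address.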
As an aside, from Will's recent experience of writing production-level image processing code on PCs, the greatest single cause of problems is segment handling, and Borland 16-bit compilers are very much out of favor worldwide amongst heavy-duty users for this reason.

There does not seem to be that much between the different ways of controlling memory banks. The 9900 does have the CRU structure as a very efficient way of controlling on/off lines, and in the 99/4a DSR structure this makes for a robust system. TI set their 128Kb card at a fixed DSR CRU base, which needs everyone to agree (none of us having TI's muscle power here). A more flexible system would be a card settable to any convenient CRU base, with a minimal DSR for location and identification purposes and read/write of bank assignment via CRU bits. Then a driver routine, perhaps downloadable from the DSR, could provide all the flexibility required. Outside DSR mapping it is still possible to use the CRU. Edgar Dohmann did this some years ago in a banked 32Kb RAM cartridge for DataBiotics, using CRU >800 as the base address. Now this is where some general agreement on CRU assignment would be necessary. The general alternative method is to use a memory-mapped control register. The 99/4a already does a lot of this for various purposes, though TI were very chintzy in the decoding. Presumably expansion designers can find a way to slot another device address into the gaps in >8000 to >A000 that currently go to waste. What is not good is to have memory-mapped addresses intruding into previously general-purpose memory areas. This is just a recipe for incompatibility, and in this case prior software writers cannot be faulted for ignoring system guidelines. Even on the margins it can be a bother. For instance the HV99 Quest banked 32Kb RAM cartridge sitting in this very machine banks with writes to the top addresses, and even this is nuisance enough to make it special purpose only.
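The difference between the two control styles is easiest to see side by side. The fragment below is only a sketch; the CRU base, the latch address and the cru_write() stub are all invented for illustration and are not any real card's interface:

    /* Two ways of selecting a RAM bank.  Both interfaces here
       are invented for illustration, not any real card's. */

    /* (a) CRU style: bank number written out as CRU bits.
       cru_write() stands in for the 9900's LDCR instruction. */
    extern void cru_write(unsigned cru_base, unsigned value,
                          unsigned nbits);
    #define CARD_CRU_BASE 0x1400u    /* ideally settable per card */

    void set_bank_cru(unsigned bank)
    {
        cru_write(CARD_CRU_BASE + 2, bank, 4);  /* 4 select bits */
    }

    /* (b) Memory-mapped style: bank number written to a latch
       decoded into one of the unused gaps below >A000.  The
       address here is made up. */
    #define BANK_LATCH (*(volatile unsigned char *) 0x9C0A)

    void set_bank_mapped(unsigned bank)
    {
        BANK_LATCH = (unsigned char) bank;
    }

Either way the driver routine sitting above it can stay the same; the agreement that matters is on where the card announces itself and how the bank assignment is read back.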
When it comes to code segments the programmer should be in control as deeply as desired and not totally insulated, otherwise inefficient code will result even if it is easy to write. Let's face it, charming and elegant as the TMS-9900 processor may be, it is very underpowered by current standards, and absolute attention to detail and good design sense are necessary to get acceptable results. I get very uneasy when I see MS-DOS and Microsoft practices being taken as a model to apply to the TI. Even more so, I get uneasy when I see Computer Science academic approval being given to an approach. Not that CS faculty don't have a whole lot of good advice to give, but efficient use of limited physical resources has rarely been a priority for them. Real users of computers do care deeply about efficiency, no matter how powerful their platform. See the book "Numerical Recipes in C, 2nd Ed" for the viewpoint of physicists and engineers as users.

It is clear that the CPU side of the 99/4a is already overwhelmed by the 9938 VDP in 80-column systems, and that more CPU memory would bring the system into better overall balance. It remains to be seen whether CPU performance is then in balance with the rest of the enlarged system. The Geneve is very much better from this point of view. No details, let alone any hardware or software, for the newer entries in the memory stakes have yet come to Newcastle.

As for Funnelweb developments to use more memory - well, basically I support the hardware we have available here, after making my own judgments on what is worth supporting and how heavily. As an example, 80-column expansions have been strongly and consistently supported, though only rather late in the piece and then courtesy of Dijit and the AVPC. If this system did not have a 9938 it would have been banished to the closet years ago, and all Funnelweb development halted. In fact nowadays 9938-related developments lead, and ideas generated are carried back where possible to the standard system. The Myarc HFDC on the other hand we regarded as too flawed to go overboard on (an example where some competition from realistic alternatives might have worked wonders). If we do not have the hardware, the support will of necessity be limited or nonexistent. Commercial sources often have an attitude to fairware developers that is ambivalent at best. The decision for them is whether to regard fairware writers as only another customer or as a resource. We respond in kind, while we still have any interest.

That is enough for now or Charlie will never get to receive this.

Tony McGovern
Funnelweb Farm
Jun / 14 / 93