Re: [colorforth] Intellasys question for Jeff Fox
- Subject: Re: [colorforth] Intellasys question for Jeff Fox
- From: gwenhwyfaer@xxxxxxxxx
- Date: Mon, 26 May 2008 18:25:24 +0100
On 26/05/2008, John Drake <jmdrake_98@xxxxxxxxx> wrote:
>> You repeat this later, but I can't help thinking it's an evasion. Most
>> people, when they want to know precision, are thinking of a number of
>> binary digits. (And a word about accuracy wouldn't go amiss either.)
>> Since the F21 had a 21-bit word, would it be fair to assume that the
>> precision was 20-bit? What about accuracy?
>
> Nonsense.
That's just plain rude.
> Machine precision for a 386 is obviously up to 32 bits.
Especially since this is just plain wrong. Jeff cited a 386/387 combination, which implies the use of the 387's transcendental instructions; the 387's extended-precision format carries a 64-bit mantissa, and because it's floating point, its dynamic range is far larger still. Maybe most of that precision is unnecessary - I won't dispute that if you never need more than 20-bit precision, the x87 is a waste of silicon - but you get it for free, and if you do need it, it's already there.
> Machine precision for an F21 would be 20 bits. No "evasion".
And no apples-to-apples comparison either. That's where the evasiveness comes in; 20-bit CORDIC on an F21 might well be 50 times the speed of a 387, but what about one that matches the 387's precision? What about the 487, which was actually contemporaneous? What about an integer-only implementation for the 486?
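To make that concrete: each CORDIC iteration buys roughly one bit of precision and costs one arctangent table entry, so matching the 387 isn't a matter of tweaking a constant. The following is just an illustrative C sketch of my own - the 20-iteration count and the 2^28 fixed-point scale are arbitrary choices, and it has nothing to do with Jeff's F21 code:

/* Minimal fixed-point CORDIC in rotation mode - an illustrative sketch,
   not anything from the F21 or the 387.  Each iteration adds roughly one
   bit of precision and needs one arctangent table entry, so the table,
   the loop count and the word width all scale with the precision asked for. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define ITER  20                     /* ~20 bits of precision (arbitrary) */
#define SCALE (1 << 28)              /* fixed-point scale (arbitrary)     */

static int32_t atan_tab[ITER];       /* atan(2^-i), precomputed with libm */

static void cordic_init(void)
{
    for (int i = 0; i < ITER; i++)
        atan_tab[i] = (int32_t)(atan(pow(2.0, -i)) * SCALE);
}

/* Rotate the unit vector through 'angle' (radians, |angle| <= pi/2);
   returns cos and sin scaled by SCALE.  The CORDIC gain is pre-divided
   out by starting x at K ~ 0.607253. */
static void cordic_sincos(double angle, int32_t *c, int32_t *s)
{
    int32_t x = (int32_t)(0.60725293500888 * SCALE);
    int32_t y = 0;
    int32_t z = (int32_t)(angle * SCALE);

    for (int i = 0; i < ITER; i++) {
        int32_t xs = x >> i, ys = y >> i;
        if (z >= 0) { x -= ys; y += xs; z -= atan_tab[i]; }
        else        { x += ys; y -= xs; z += atan_tab[i]; }
    }
    *c = x;
    *s = y;
}

int main(void)
{
    int32_t c, s;
    cordic_init();
    cordic_sincos(0.5, &c, &s);
    printf("cos(0.5) ~ %f   sin(0.5) ~ %f\n",
           (double)c / SCALE, (double)s / SCALE);
    return 0;
}

Push ITER and the word width towards 64 and the table, the loop and the scaling all grow with them - which is exactly why a like-for-like figure matters.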
The only way the figure is meaningful is as a comparison of two different styles of programming - the style of carefully rethinking everything from the ground up and only implementing what you need in precisely the way you need it, compared with the style of starting with as rich an environment as possible and hoping whoever implemented it didn't screw up. As a comparison of technology, it's almost meaningless without extensive commentary - a point I made in my previous email.
(It is possible that I'm actually in what has come to be termed "violent agreement" with Jeff, or even yourself, on this point. But the fact is that stating the 50:1 ratio as a comparison in itself, without making this clear, has been fuelling the likes of John Passaniti for the last decade; it's just too depressing to go to c.l.f and see Jeff and John still engaged in the same flamewar I found them fighting 15 years ago. It is transparently clear that the F21 and the 386, or the C18 and the Core 2, will not do anywhere close to the same things at all. It is also clear that if your application will fit inside a C18, a Core 2 will be a waste of money - but that if your application doesn't need the speed of a C18, a PIC may be a better bet. It may even be that if your application can use the C18's raw speed - bearing in mind that 1000 MIPS isn't anything special these days, and that it's a theoretical peak for a C18 rather than a much more useful sustained-rate figure - a Core 2 would be quite inadequate for the task. Perhaps the only appropriate comparison for a C18, for those of us without fab access, would be 1/24 of whichever Xilinx or Altera FPGA is closest to the SEAforth-24A in unit cost.)
> For the record a recent (2 years ago) 16 bit implementation
> of CORDIC was posted on comp.lang.forth.
>
> http://www.complang.tuwien.ac.at/forth/programs/cordic.fs
That's great, but a 16-word lookup table is a significant chunk of a 64-word memory. Moreover, the great advantage of CORDIC is that it doesn't need multiplication - but if you have a single-cycle multiply and a decent amount of memory, you can use a lookup table and linear interpolation and reduce the cycle count dramatically.
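For illustration, here's roughly what I mean - a sketch of my own with an arbitrary table size and fixed-point format, not anything from the F21 or from that c.l.f posting. Once the table is built, each call costs an index, one multiply, a shift and an add:

/* Sine via a lookup table plus linear interpolation - a sketch of the
   alternative to CORDIC when a fast multiply and some memory are available.
   Table size and fixed-point formats are arbitrary choices for illustration. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define TBL_BITS  8                        /* 256 intervals over a quarter wave */
#define TBL_SIZE  ((1 << TBL_BITS) + 1)    /* +1 entry so idx+1 is always valid */
#define OUT_SCALE 32767                    /* roughly Q15 output                */
#define HALF_PI   1.5707963267948966

static int32_t sine_tbl[TBL_SIZE];

static void tbl_init(void)
{
    for (int i = 0; i < TBL_SIZE; i++)
        sine_tbl[i] = (int32_t)(sin(HALF_PI * i / (TBL_SIZE - 1)) * OUT_SCALE + 0.5);
}

/* phase 0..65535 spans 0..pi/2 (first quadrant only; the rest is symmetry).
   Upper bits index the table, lower bits interpolate between entries. */
static int32_t qsin(uint16_t phase)
{
    uint32_t idx  = phase >> (16 - TBL_BITS);
    uint32_t frac = phase & ((1u << (16 - TBL_BITS)) - 1u);
    int32_t  a = sine_tbl[idx], b = sine_tbl[idx + 1];
    return a + (int32_t)(((b - a) * (int32_t)frac) >> (16 - TBL_BITS));
}

int main(void)
{
    tbl_init();
    /* 21845/65536 of a quarter wave ~ 30 degrees, so expect ~0.5 */
    printf("%f\n", qsin(21845) / (double)OUT_SCALE);
    return 0;
}

First quadrant only; the other three fall out by symmetry. The trade is memory for cycles - precisely the trade a 64-word chip can't afford, which is why CORDIC earns its keep there.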
> So if anyone REALLY cares about benchmarks (I don't)
That's nice for you, but people deciding which chip to select for their new application need some way of determining whether or not that application can even be executed by that chip. Only hobbyists can afford to start with the processor and build outwards; not even Chuck has done that - his processors are built to fit their intended applications.
>> What was the cheapest that single F21 chips were ever available?
>
> You're missing the point. Jeff isn't talking about OTS cost.
Then Jeff isn't talking about anything useful to anyone without their own fabrication facilities. (Do you have your own fab?) If I were looking for a chip to use in my design, I'd be looking at unit costs; a fab cost of "a couple of cents" is pointless to discuss when the unit cost is $40 (or even $4).
> He's talking about fab costs as in how much it would cost
> to fab X number of 386s to how much it would cost to fab
> X number of F21s.
The cost of fabbing any given wafer at any given resolution, as I understand it, is pretty much constant - and quite independent of what's on it. The smaller a chip, the more you can pack onto a wafer, and the more will survive the wafer's inevitable flaws. (Of course, you also need to multiply the cost of a wafer by the number of wafers you blow completely with non-functional designs... but that's not a fab cost per se.)
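A back-of-envelope illustration of what that means for per-die cost - every number below (wafer cost, defect density, die sizes) is invented purely to make the arithmetic visible:

/* Back-of-envelope die cost: the processed wafer costs roughly the same
   whatever is on it, so per-die cost falls with area twice over - more
   dies per wafer, and more of them surviving the defects.  Every figure
   here (wafer cost, defect density, die areas) is made up for illustration. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double wafer_cost  = 1000.0;   /* $ per processed wafer (made up)  */
    const double wafer_diam  = 150.0;    /* mm (made up)                     */
    const double defect_dens = 0.001;    /* defects per mm^2 (made up)       */
    const double areas[]     = { 100.0, 10.0, 1.0 };   /* die areas, mm^2    */
    const double pi          = 3.14159265358979;

    double wafer_area = pi * (wafer_diam / 2) * (wafer_diam / 2);

    for (int i = 0; i < 3; i++) {
        double a     = areas[i];
        double dies  = wafer_area / a;           /* ignores edge loss         */
        double yield = exp(-defect_dens * a);    /* simple Poisson yield model */
        double cost  = wafer_cost / (dies * yield);
        printf("%6.1f mm^2: ~%5.0f dies, yield %.3f, ~$%.3f per good die\n",
               a, dies, yield, cost);
    }
    return 0;
}

With those made-up figures the smallest die comes out at a few cents per good die - which is the only sense in which a chip ever "costs a couple of cents", and it evaporates as soon as packaging, test and margin enter the picture.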
>> But where can I buy a C18 for a couple of cents? It's an unrealistic
>> figure; even if they were packaged singly, and the C18 did cost a
>> couple of cents to fabricate, the chip packaging alone would
>> completely dominate the cost.
>
> That's part of the reason why multiple C18 cores are put on
> the same chip.
...which, as I mentioned previously, will *still* be dominated by packaging costs - that and the cost of custom fabbing, of course, and the need to return a profit from comparatively low volume. But for anyone who's reliant on Intellasys to sell them C18s in whatever form, talking about anything other than the cost of a production chip in quantity is worthless; the only people for whom fab costs are useful are people with their own fabrication facilities - and even then, they will still need to put those chips in carriers.
Regards
Gwenhwyfaer