# What is considered an "advanced" topic in Python?

73 messages

## What is considered an "advanced" topic in Python?

On 01/06/2015 16:07, Laura Creighton wrote:
> In a message of Mon, 01 Jun 2015 14:57:02 +0300, Marko Rauhamaa writes:
>> In 1951, decimal numbers would have done little good in the UK with the
>> pound divided into 20 shillings and the shilling into 12 pence. Maybe a
>> "Babylonian" module would have been perfect.
>>
>> Marko
>
> You are being facetious, but in point of fact, naive Brits who knew
> nothing of either accounting systems or floating point for the most
> part got things right when they bought their Sinclair computer in the
> early 1980s.
>
> This is because their natural tendency was to calculate all the pounds
> separately, and then the shillings separately, and then the pence.

Strangely, I don't remember that. Although we did have to add up sums of
money in three columns: pounds, shillings and pence, and carry as
necessary from right to left.

> (With guineas and other odd stuff thrown in, when they needed them.)
> This meant that they kept 3+ ledgers at one time, and then, when they
> were done calculating, as one final step converted what they had into
> its representation where you never had more than 100 pence or 12
> shillings.

You mean 12 pence or 20 shillings?

> Thus, entirely by accident, they did their accounting in integers, not
> decimals at all. And this is, of course, the first thing that people
> who write real systems that add money learn -- convert everything to
> pennies (or whatever you call them) and do all your calculations in
> pennies, and then as the final step express that in dollars and cents,
> or euros and cents, or what have you.

When decimal currency came in, we had decimals *and* fractions, because
of the 1/2p coin. So this didn't quite work. But it can work now, so we
have no need of either binary or decimal floating point.
(Well, until you have to work in dual currencies, which can mean that an
amount of money that is exact in one currency can only be approximate in
the other, when both are expressed in the smallest currency unit - cent,
penny or whatever.)

> The Brits still got in trouble when they needed to calculate things
> for their 4.2 per cent mortgage, or decided to keep a running total of
> the sales tax they were paying, but they at least did not grab the
> floating-point representation as the first thing off the shelf when
> they needed money.

At one time the choice was integer or floating point in many languages,
unless you were specifically using a business language such as Cobol. I
think the Sinclair computer barely had integer types, so the choice was
even narrower.

--
Bartc
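[Editor's note: the pennies-first bookkeeping described above is easy to
sketch in Python. The prices, the principal and the helper names below
are made-up illustrations, not figures from the thread.]

```python
from decimal import Decimal, ROUND_HALF_UP

# Binary doubles cannot represent most decimal fractions exactly:
print(0.10 + 0.20)  # 0.30000000000000004, not 0.30

# Keeping money in integer pence makes addition exact:
prices_pence = [1999, 250, 75]               # £19.99, £2.50, £0.75
total = sum(prices_pence)                    # 2324 pence, exactly
print(f"£{total // 100}.{total % 100:02d}")  # £23.24

# For rates such as a 4.2% mortgage, decimal.Decimal gives exact
# decimal arithmetic with explicit rounding back to whole pence:
principal = Decimal("199900")                # £1999.00 held in pence
interest = (principal * Decimal("0.042")).quantize(
    Decimal("1"), rounding=ROUND_HALF_UP)    # 8395.8 rounds to 8396
print(interest)                              # 8396 pence (£83.96)
```

The rounding mode has to be chosen explicitly (`ROUND_HALF_UP` here);
which mode is correct is a business rule, not a language default.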

## What is considered an "advanced" topic in Python?

In reply to this post by alister

In a message of Mon, 01 Jun 2015 15:26:49 -0000, alister writes:
> I don't think anyone programmed a Sinclair computer to use pre-decimal
> currency; we converted to decimal in 1971 (although the last
> pre-decimal coin did not go out of use until 1993).

Interesting. Somebody sent me one a few years ago that supposedly was
written for that. Maybe it was doing 'old style accounting' for fun? I
will have to go ask that person about it. Thank you for letting me know.

Laura

## What is considered an "advanced" topic in Python?

In reply to this post by BartC-3

On Tue, Jun 2, 2015 at 1:17 AM, BartC wrote:
> On 01/06/2015 14:52, Chris Angelico wrote:
>> It's like the eternal debate about assignment and whether "x = x + 1"
>> is nonsense, with advocates preferring "x := x + 1" as being somehow
>> fundamentally different. It isn't. It's just a notational change, and
>> not even a huge one. (Though I do see the line of argument that it
>> should be "x <- x + 1" or something else that looks like an arrow.)
>
> 'x <- x + 1' already means something as an expression (whether x is
> less than -x + 1). 'x <= x + 1' has the same problem.
>
> But I have used "=>" before, for left-to-right assignment. (Mostly I
> use ":=".)

In Python it does, yes; I'm talking about the language design advocates.
Some recommend a two-character ASCII notation like "<-" or "<=", others
prefer a single-character symbol, e.g. "←" or "≔", but whatever it is,
it will have no meaning in that language other than assignment. And yes,
I can see the value of using an arrow to indicate assignment... but I
don't really see a huge problem with using "=" to mean assignment, given
that people from a mathematical background will have to grok the entire
concept of temporal truth anyway. Whatever symbol you use, it has to be
explained.

ChrisA
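[Editor's note: Python itself later acquired a second spelling. Since
Python 3.8, the ":=" "walrus" operator performs assignment inside an
expression, while "=" remains statement-only, so both notations coexist
with identical assignment semantics:]

```python
x = 10
x = x + 1        # statement assignment: rebinds x, yields no value
assert x == 11

# Since Python 3.8, ':=' assigns *within* an expression, so the
# assignment and the loop test can share one computation:
seen = []
while (x := x - 1) > 8:
    seen.append(x)
print(seen)      # [10, 9]
```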

## What is considered an "advanced" topic in Python?

In reply to this post by BartC-3

On 2015-06-01, BartC wrote:
> At one time the choice was integer or floating point in many
> languages, unless you were specifically using a business language such
> as Cobol.

My recollection in the early days of home computers is that many BASIC
implementations had BCD floating point instead of binary. Back then most
CPUs had instructions specifically for dealing with BCD represented with
4 bits per digit. Dunno if they still do; I can't even remember the last
time I did calculations in BCD.

> I think the Sinclair computer barely had integer types so the choice
> was even narrower.

--
Grant Edwards               grant.b.edwards at gmail.com
Yow! I'm totally DESPONDENT over the LIBYAN situation and the price of
CHICKEN...
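[Editor's note: the 4-bits-per-digit packed representation Grant
mentions can be sketched in Python; the helper names here are mine, for
illustration only, not from any particular BASIC:]

```python
def pack_bcd(n: int) -> bytes:
    """Pack a non-negative integer as BCD, two decimal digits per byte."""
    digits = str(n)
    if len(digits) % 2:            # pad to an even number of digits
        digits = "0" + digits
    return bytes((int(hi) << 4) | int(lo)
                 for hi, lo in zip(digits[::2], digits[1::2]))

def unpack_bcd(data: bytes) -> int:
    """Recover the integer from packed BCD."""
    return int("".join(f"{b >> 4}{b & 0x0F}" for b in data))

print(pack_bcd(1971).hex())        # '1971' - each nibble is one digit
print(unpack_bcd(pack_bcd(1971)))  # 1971
```

The appeal for money work is visible in the hex dump: the stored nibbles
*are* the decimal digits, so no decimal fraction is ever approximated.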

## What is considered an "advanced" topic in Python?

In reply to this post by Laura Creighton-2

On 2015-06-01 16:51, Laura Creighton wrote:
> In a message of Mon, 01 Jun 2015 15:26:49 -0000, alister writes:
>> I don't think anyone programmed a Sinclair computer to use
>> pre-decimal currency; we converted to decimal in 1971 (although the
>> last pre-decimal coin did not go out of use until 1993).
>
> Interesting. Somebody sent me one that supposedly was written for
> that a few years ago. Maybe it was doing 'old style accounting' for
> fun? I will have to go ask that person about it. Thank you for
> letting me know.

1971 was also the year we started the switch over to the metric system.
It was going to be done gradually over a 10-year period. It's still a
work in progress...

## What is considered an "advanced" topic in Python?

In reply to this post by Laura Creighton-2

MRAB:
> 1971 was also the year we started the switch over to the metric
> system. It was going to be done gradually over a 10-year period. It's
> still a work in progress...

Same here in Finland. Minutes, hours and degrees persist, as do
calories, teaspoons and carats.

Marko

## Zero [was Re: What is considered an "advanced" topic in Python?]

In reply to this post by Skip Montanaro

On 01/06/2015 14:14, Skip Montanaro wrote:
> Maybe you should just install a decent spam filter or switch to Gmail,
> which has a functioning spam filter (unlike Yahoo...)

Okay, I'll bite: what's wrong with the Yahoo spam filter?

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

## What is considered an "advanced" topic in Python?

In reply to this post by Grant Edwards-7

On 01/06/2015 18:02, Grant Edwards wrote:
> On 2015-06-01, BartC wrote:
>> At one time the choice was integer or floating point in many
>> languages, unless you were specifically using a business language
>> such as Cobol.
>
> My recollection in the early days of home computers is that many BASIC
> implementations had BCD floating point instead of binary. Back then
> most CPUs had instructions specifically for dealing with BCD
> represented with 4 bits per digit. Dunno if they still do; I can't
> even remember the last time I did calculations in BCD.
>
>> I think the Sinclair computer barely had integer types so the choice
>> was even narrower.

Didn't Turbo C have compiler options to allow either BCD or fp?

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

## What is considered an "advanced" topic in Python?

In reply to this post by Grant Edwards-7

On 2015-06-01, Mark Lawrence wrote:
> On 01/06/2015 18:02, Grant Edwards wrote:
>> On 2015-06-01, BartC wrote:
>>> At one time the choice was integer or floating point in many
>>> languages, unless you were specifically using a business language
>>> such as Cobol.
>>
>> My recollection in the early days of home computers is that many
>> BASIC implementations had BCD floating point instead of binary. Back
>> then most CPUs had instructions specifically for dealing with BCD
>> represented with 4 bits per digit. Dunno if they still do; I can't
>> even remember the last time I did calculations in BCD.
>>
>>> I think the Sinclair computer barely had integer types so the
>>> choice was even narrower.
>
> Didn't Turbo C have compiler options to allow either BCD or fp?

Probably. BCD support was pretty widespread in the Pascal compilers I
remember for DOS and CP/M. Binary floating point didn't get very popular
until HW support for it became more common.

--
Grant Edwards               grant.b.edwards at gmail.com
Yow! Now we can become alcoholics!

## Zero [was Re: What is considered an "advanced" topic in Python?]

In reply to this post by Skip Montanaro

On 6/1/2015 9:14 AM, Skip Montanaro wrote:
> remove. Maybe you should just install a decent spam filter or switch
> to Gmail, which has a functioning spam filter (unlike Yahoo...)

For me, Yahoo's spam filter is comparable to Gmail's. (My udel account
is actually handled by Gmail.) Either spam has decreased recently or
both have gotten better: they put fewer good things in the junk box for
checking, and toss away more of the real spam.

--
Terry Jan Reedy

## What is considered an "advanced" topic in Python?

In reply to this post by gene heskett-4

On Monday 01 June 2015 20:33:50 Dennis Lee Bieber wrote:
> On Mon, 1 Jun 2015 09:49:35 -0400, Gene Heskett declaimed the
> following:
>> But IMO, any language that does not have the ability to set an fp
>> number to a fixed number of digits to the right of the separator,
>> regardless of whether , or . is used, needs one written.
>
> That removes all modern hardware units using the IEEE floating point
> standard, and pretty much all languages since (and including) the
> first version of FORTRAN.
>
> Floating point numbers are x significant digits (commonly x=7 for
> single precision and x=15 for double precision) with an exponent.
>
> Ada supports float and fixed point, but fixed point is most easily
> visualized as an integer with a scaling factor.
>
>> The ability to guarantee that the output of a FIX(2) is zero to at
>> least 17 significant digits, so that a zero comparison is not
>> non-zero because there's a 1 fifteen digits out in a 2-digit money
>> format, is an absolute requirement.
>
> Use COBOL then... One used to have to go out of their way to get a
> "floating point" data type in COBOL... The common numeric type is
> packed BCD.
>
> Even M$'s "money" datatype uses four decimal places even if only two
> are displayed to the user -- it allows for accumulation of fractions
> of a cent over time.

That is a far more restrictive interpretation than I had in mind. What,
in the case of g-code, should be the result of looking at a double and
seeing that rounding errors in incrementing a number originally set to
zero, adding 1.000 to it 21 times, have created say
21.00000001200873000? Then we do:

  while [number gt 0.000000]
    number = number - 1.0000000
    do stuff using that number
  endwhile

But the loop then iterates an extra pass, because the 1200873000 is
still there. The numbers are all doubles, but weren't initialized to a
sufficient number of digits to the right of the . or ,

This is of course our own fault, caused by sloppy coding.

But specifying the value to 15 magnitudes more than the machine is
capable of, without spending weeks writing a screw compensation file to
get that level of accuracy, is counterproductive. It's severe overkill
IMO. Any language ought to just throw away those rounding errors by
filling the extra precision with NNNNN.nnnn000000000000000's, even if
the original initialized value was only stated as 1.00000.

> --
> Wulfraed                 Dennis Lee Bieber         AF6VN
>     wlfraed at ix.netcom.com    HTTP://wlfraed.home.netcom.com/

Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty: soap, ballot,
jury, and ammo. Please use in that order." -Ed Howdershelt (Author)
Genes Web page
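[Editor's note: one nuance worth adding is that repeatedly adding
exactly 1.0 is exact in IEEE 754 doubles (every integer up to 2**53 is
representable), so the drift Gene describes typically comes from steps
like 0.1 that have no exact binary form. The usual defence for the
countdown loop above is a tolerance rather than a bare zero comparison;
a minimal Python sketch:]

```python
# A step of 0.1 has no exact binary representation, so the sum drifts:
number = 0.0
for _ in range(10):
    number += 0.1
print(number == 1.0)   # False: number is 0.9999999999999999

# Guarding the countdown with a small tolerance (instead of "> 0")
# keeps leftover drift from causing an extra or missing iteration:
EPS = 1e-9
steps = 0
while number > EPS:
    number -= 0.1
    steps += 1
print(steps)           # 10, as intended
```

EPS should be chosen well below the step size but well above the
expected rounding error; 1e-9 is an arbitrary illustration here.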

## What is considered an "advanced" topic in Python?

In reply to this post by BartC-3

On Monday, June 1, 2015 at 9:40:39 PM UTC+5:30, Chris Angelico wrote:
> On Tue, Jun 2, 2015 at 1:17 AM, BartC wrote:
>> On 01/06/2015 14:52, Chris Angelico wrote:
>>> It's like the eternal debate about assignment and whether
>>> "x = x + 1" is nonsense, with advocates preferring "x := x + 1" as
>>> being somehow fundamentally different. It isn't. It's just a
>>> notational change, and not even a huge one. (Though I do see the
>>> line of argument that it should be "x <- x + 1" or something else
>>> that looks like an arrow.)
>>
>> 'x <- x + 1' already means something as an expression (whether x is
>> less than -x + 1). 'x <= x + 1' has the same problem.
>>
>> But I have used "=>" before, for left-to-right assignment. (Mostly I
>> use ":=".)
>
> In Python it does, yes; I'm talking about the language design
> advocates. Some recommend a two-character ASCII notation like "<-" or
> "<=", others prefer a single-character symbol, but whatever it is, it
> will have no meaning in that language other than assignment. And yes,
> I can see the value of using an arrow to indicate assignment... but I
> don't really see a huge problem with using "=" to mean assignment,
> given that people from a mathematical background will have to grok
> the entire concept of temporal truth anyway. Whatever symbol you use,
> it has to be explained.

It's not merely temporal truth but truth vs action, and their wanton
overloading. In every (natural) language that I know (of)[*],
declarative and imperative moods are distinguished. It does not require
a PhD in English to see that "Please sit down." and "It is raining."
differ in mood. Imperative languages after Pascal (especially C and
following) use a locution from the one to denote a semantics in the
other, and make a pickle of beginners' brains.

---------
[*] Except perhaps magic/mystic-speak, wherein pronouncing a spell makes
the heavens thunder. Maybe I am just too old to have noticed that
imperative programming is a paradigm of magic.