I’ve wondered why programming languages don’t include accurate fractions as part of their standard utils. I don’t mind calling dc, but I wish I didn’t need to write a bash script to pipe the output of dc into my program.
Many do. MATLAB, Julia, and Smalltalk are the ones I know of.
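For what it's worth, Python's standard library also ships an exact rational type (fractions.Fraction). A minimal sketch of the kind of arithmetic being asked for:

    from fractions import Fraction

    # Exact rational arithmetic: 1/3 + 1/6 reduces to exactly 1/2.
    print(Fraction(1, 3) + Fraction(1, 6))    # 1/2

    # Binary floats can't even represent 1/10 exactly, so errors creep in.
    print(0.1 + 0.2)                          # 0.30000000000000004
    print(Fraction(1, 10) + Fraction(2, 10))  # 3/10, exact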
Performance penalty, I would imagine. You would have to do many more steps at the processor level to calculate with fractions than with floats. The languages more suited toward math do have them, as someone else mentioned, but the others probably can't justify the extra computational expense for the little benefit it would bring. Also, I'd bet there are already open source libraries for all the popular languages if you really need a fraction type.
It would be pretty easy to make a fraction class if you really wanted to. But I doubt it would make much difference in the precision of calculations, since the result would still be limited to a float value (edit: I guess I'm probably wrong on that, but reducing a fraction would be less trivial, I think?)
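Reducing a fraction is just dividing numerator and denominator by their GCD; the less trivial part is that every operation pays for a few multiplications plus that GCD, versus a single hardware instruction for a float add. A toy sketch of such a class (names are illustrative only; Python's built-in fractions.Fraction already does this, and it isn't limited to float precision since the numerator and denominator are arbitrary-precision ints):

    from math import gcd

    class Frac:
        """Toy exact fraction for illustration; use fractions.Fraction in real code."""
        def __init__(self, num, den):
            if den == 0:
                raise ZeroDivisionError("denominator must be nonzero")
            if den < 0:                      # keep the sign on the numerator
                num, den = -num, -den
            g = gcd(num, den)                # reducing is just a GCD
            self.num, self.den = num // g, den // g

        def __add__(self, other):
            # a/b + c/d = (a*d + c*b) / (b*d): three multiplies, an add,
            # then the GCD in __init__, which is far more work than one float add.
            return Frac(self.num * other.den + other.num * self.den,
                        self.den * other.den)

        def __repr__(self):
            return f"{self.num}/{self.den}"

    print(Frac(1, 3) + Frac(1, 6))  # 1/2, exactly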