Hi, I am a newbie in embedded systems. This is a basic question: I couldn't understand the multiplication of two numbers in fixed-point arithmetic. I am reading the book "Introduction to Embedded Systems: A Cyber-Physical Systems Approach" and came across this concept on page 195.
Can you please explain this to me? Thanks in advance. I've also attached the link to the book.
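The general technique, sketched in C below (the Q16.16 format is my own choice for illustration and not necessarily the one the book uses): the two fixed-point values are multiplied as plain integers, which doubles the number of fraction bits, so the double-width product is shifted back down.

#include <stdint.h>
#include <stdio.h>

/* Minimal sketch of fixed-point multiplication in Q16.16 format
   (16 integer bits, 16 fraction bits). Illustrative only. */
typedef int32_t q16_16;

#define Q16_ONE (1 << 16)   /* the value 1.0 in Q16.16 */

q16_16 q16_mul(q16_16 a, q16_16 b)
{
    /* The raw product has 32 fraction bits, so widen to 64 bits
       and shift right by 16 to get back to Q16.16. */
    return (q16_16)(((int64_t)a * b) >> 16);
}

int main(void)
{
    q16_16 x = (q16_16)(1.5  * Q16_ONE);   /* 1.5  -> 0x00018000 */
    q16_16 y = (q16_16)(2.25 * Q16_ONE);   /* 2.25 -> 0x00024000 */
    printf("%f\n", q16_mul(x, y) / (double)Q16_ONE);   /* prints 3.375000 */
    return 0;
}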
I'm considering putting in a grant to do a Solidity library for Decimal Arithmetic, sometimes referred to as Fixed Point Arithmetic.
An example of this is the decimal library in Python.
The primary use case that I know of is financial applications but I'm sure there are others.
I would like to be able to submit some basic data on the level of interest and at least a generic description of the application that you would like to use it for.
I was wondering, in a purely hypothetical scenario (e.g. a hypothetical future Haskell-inspired language, kind of like Idris, or a world where the next Haskell Report would consciously break backwards compatibility, like Python 2 -> 3)...
How would you feel about having (+) reserved for only associative additive operations, with a distinct operator for the non-associative IEEE floating-point operation? For the sake of this post, let's call it (+.); analogously for multiplication.
That would read something like this:
-- instance IEEE Double where
-- ...
-- sqr :: (IEEE c) => c -> c
area = (sqr r) *. 3.14159265359
mega :: Int -> Int
mega n = 1_000_000 * n
So, how do you feel about this?
Some of my thoughts:
In other languages, where floating-point and integral types are shoehorned together with automatic upcasting, like x = 3 * 3.4, I see how this distinction might be unergonomic or even inconsistent -- x = 3 * 3; y = 2.3 *. 2.1; z = 3 ??? 3.2 -- and plain unnecessary. However, in Haskell we already can't mix the different types: 1 + 3.2 is not valid, and we have to write either fromIntegral 1 + 3.2 or 1 + floor 3.2 anyway.
For comparison, we already have (^), (^^) and (**) in Haskell.
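For a concrete illustration of the non-associativity that motivates splitting the operators, here is a small program (in C rather than Haskell, just to show the raw IEEE-754 behaviour; Haskell's Double behaves the same way):

#include <stdio.h>

/* IEEE floating-point addition is not associative:
   grouping changes the rounding, so it changes the result. */
int main(void)
{
    double a = 0.1, b = 0.2, c = 0.3;
    double left  = (a + b) + c;   /* 0.6000000000000001 */
    double right = a + (b + c);   /* 0.6 */
    printf("(a + b) + c = %.17g\n", left);
    printf("a + (b + c) = %.17g\n", right);
    printf("equal? %s\n", left == right ? "yes" : "no");
    return 0;
}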
Hello, all!
I'm proud to present shellmath, a decimal calculator written entirely in bash:
https://github.com/clarity20/shellmath
Shellmath is a highly optimized library of shell functions for floating-point arithmetic.
Shellmath proves you can do floating-point math directly in the shell! It does not call out to any external programs -- calculators, text processors, or anything else!
For skeptics ;-) I've included a demo that calculates e. I've also posted a few words about the methodology, optimization techniques, and a few bells and whistles in the README and in the project wiki.
I eagerly await your feedback!
Be good, and have a great day!
P.S. Thanks to everyone for your feedback. I've committed bug fixes for everything through 11/2/20, and I'm now running shellmath peachy-keen not only on my Windows and Android devices, but on Linux too. Would be happy to have your feedback on this fixed-up version!
I've been using Julia for years in my research and just noticed today that simple floating-point arithmetic is imprecise. For example, 1.6 - 0.66 = 0.9400000000000001. I taught myself Julia and don't come from a CS background, so I wasn't aware this was even a thing until I googled it just now.
My question is two-fold:
How serious is this issue? I've been maximizing the likelihood of a state space model which can involve many computations. Should I be worried about my end results?
Are there simple ways to circumvent this problem or is it negligible?
edit: to clarify, my results are reported to 3-4 decimals, so extreme precision is not required; I'm more concerned with the imprecision compounding over many computations.
edit 2: Thanks for the feedback, I feel better already. I think I should be fine, but it's definitely an interesting quirk that I'm glad I know exists. You learn something new every day.
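For reference, the same behaviour reproduced in C (Julia's Float64 is the same IEEE-754 double), together with the usual workaround of comparing against a tolerance rather than testing exact equality:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double x = 1.6 - 0.66;
    printf("%.17g\n", x);                      /* 0.94000000000000006 */
    /* Exact comparison fails, because x is not exactly 0.94: */
    printf("x == 0.94 ? %d\n", x == 0.94);     /* 0 */
    /* Tolerance-based comparison succeeds: */
    printf("close?      %d\n", fabs(x - 0.94) < 1e-9);   /* 1 */
    return 0;
}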
Question is pretty much in the title. Any help appreciated!
https://github.com/db47h/decimal
TL;DR This is a port of Go's big.Float to a big.Decimal. Compared to other similar packages out there, the API is identical to math/big, and it uses a decimal representation for the mantissa.
My guess is that they used some sort of floating-point library that mimicked floating-point math using the ALU to process the data. If so, I would like to find out how these libraries were implemented.
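A rough sketch of the starting point such a library would use: treat the float's bits as an integer and unpack the IEEE-754 fields, so that add and multiply can then be done entirely with integer ALU operations. Real soft-float libraries (e.g. Berkeley SoftFloat) build on this; the code below is only my illustration of the unpacking step.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = -6.25f;               /* -1.5625 * 2^2 */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits); /* reinterpret the bits, no FPU needed */

    uint32_t sign     = bits >> 31;
    uint32_t exponent = (bits >> 23) & 0xFF;   /* biased by 127 */
    uint32_t mantissa = bits & 0x7FFFFF;       /* implicit leading 1 not stored */

    printf("sign=%u exponent=%d mantissa=0x%06X\n",
           sign, (int)exponent - 127, mantissa);
    /* prints: sign=1 exponent=2 mantissa=0x480000 */
    return 0;
}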
So in this K&R book it explains that you can modify a pointer to point elsewhere (I concur), but the result is undefined if you assign the location of a string constant to a pointer and then try to modify the string contents. So how does strcpy work?
/* copy t to s, up to and including the terminating '\0' */
void strcpy(char *s, char *t)
{
    while (*s++ = *t++)
        ;
}
If s and t point to arrays, then I believe it works because they point to writable arrays, not string constants: the loop walks down t, assigning each character to s, until the assigned character is '\0'.
I experimented with the pointers here, assigning two arrays to two pointers, and then used strcpy:
main()
{
char array123[4] = "123";
char array4[3] = "44";
char *stringp1 = array123;
char *stringp2 = array4;
strcpy(stringp1, stringp2);
printf("%s", stringp1);
}
I got errors and took an image of the errors: https://imgur.com/a/tXIKNmZ
So what am I mixing up? I need help.
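A guess at what's going wrong, with a sketch of a fix: the compiler is most likely rejecting this because your own strcpy conflicts with the standard library's declaration, char *strcpy(char *, const char *). Renaming the function (my_strcpy is a made-up name) and adding the missing include makes it build:

#include <stdio.h>

void my_strcpy(char *s, char *t)  /* renamed to avoid clashing with the library's strcpy */
{
    while ((*s++ = *t++))         /* copies each character, stopping after '\0' */
        ;
}

int main(void)
{
    char array123[4] = "123";
    char array4[3] = "44";
    char *stringp1 = array123;
    char *stringp2 = array4;
    my_strcpy(stringp1, stringp2);
    printf("%s\n", stringp1);     /* prints "44" */
    return 0;
}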
I think this is what's always confused me!
I'm watching this dude on youtube: https://snipboard.io/gBTive.jpg
And it seems like he's doing every solution differently, so is it fair to say you can add, subtract, divide, and multiply however you like in math/algebra as long as you do the same thing to the other side?
I don't understand how that won't change the answer midway even though we're doing it to both sides; that seems like magic to me, honestly!
And if that really is the case, how will I solve such questions? It's almost as if people who do these things know the answer in advance and then reverse-engineer it.
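To make it concrete, here is a small worked example (an equation I made up). Both sides name the same number, so doing the same operation to both sides keeps them equal at every step, and the solution never changes:

\begin{aligned}
2x + 3 &= 11 \\
2x + 3 - 3 &= 11 - 3 && \text{(subtract 3 from both sides)} \\
2x &= 8 \\
2x / 2 &= 8 / 2 && \text{(divide both sides by 2)} \\
x &= 4
\end{aligned}

Someone else could divide by 2 first and then subtract 3/2; the steps look different, but every step preserves the equality, so both routes end at x = 4. That's why different-looking solutions still give the same answer.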
Based on our "50% can still pass" discussions, I've been thinking of a new grading scheme that is arithmetic in form. I wonder: does anyone else do anything like this who can offer feedback on how it worked?
Let me explain. In my domain, work is progressive, meaning students cannot just pass summative tests (there are none per se). Rather, the grading is formative and ongoing, one thing built upon the last, with a good deal of reflexivity built in.
Therefore, upon consideration, it seemed defeatist to the learning purpose if students could skip, say, every other project (normally one a week, in addition to some longer projects) and still pass. With that kind of skipping, progression and building upon previous work would not take place, or would barely take place.
So I'm thinking of using a pseudo-arithmetic sequence of points deducted for subsequent non-fulfillment of assignments. For example, missing one assignment results in a 3-point deduction; missing two results in -3 for the first and -5 for the second; then something like -8 for the third. In this way, missing three projects would cost not 9 points off the final grade (3 + 3 + 3) but 16 points (3 + 5 + 8).
The points off rise with each subsequent miss. This seems to provide an adjustment for the inflated grading policy we now have: students know that under the current school policy they can skip a bunch of projects and still pass, and I hope this would stem some of that attitude. We do not seem to have any statement barring such a scheme, and if it is presented clearly, up front, and in an ongoing manner, it seems transparent.
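For what it's worth, if I used a strict arithmetic sequence instead (first term 3, common difference 2; my 3, 5, 8 above is only roughly that), the total deduction for m misses would be

S_m = \sum_{k=1}^{m} \bigl(3 + 2(k-1)\bigr) = \frac{m}{2}\bigl(6 + 2(m-1)\bigr) = m(m+2),

so three misses would cost S_3 = 15 points rather than a flat 9.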
Thoughts?
What do you guys think about the posit arithmetic standard?
Why do posits beat floats at both dynamic range and accuracy?