Fixed point arithmetic in C

Hi, I am a newbie in embedded systems. This is a basic question, but I couldn't understand how two numbers are multiplied in fixed-point arithmetic. I am reading the book "Introduction to Embedded Systems: A Cyber-Physical Systems Approach" and came across this concept on page 195.

Can you please explain this to me? Thanks in advance. I've also attached a link to the book.

Introduction to embedded systems book
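
Fixed-point multiplication generally boils down to this: a fixed-point number is an ordinary integer with an implicit scale factor of 2^N, so the raw integer product of two such numbers carries a scale factor of 2^(2N) and has to be shifted right by N, using a wider intermediate type so the full product fits. A minimal C sketch using the common Q15 format (the helper names here are illustrative, not from the book):

#include <stdint.h>
#include <stdio.h>

/* Q15: a 16-bit signed integer x represents the real value x / 2^15.
 * Multiplying two Q15 values as plain integers yields a Q30 result,
 * so the product must be shifted right by 15 to get back to Q15.
 * A 32-bit intermediate holds the full product before the shift. */
typedef int16_t q15_t;

static q15_t q15_mul(q15_t a, q15_t b)
{
    int32_t product = (int32_t)a * (int32_t)b;   /* Q15 * Q15 = Q30 */
    return (q15_t)(product >> 15);               /* Q30 -> Q15 (truncating) */
}

static q15_t  q15_from_double(double x) { return (q15_t)(x * 32768.0); }
static double q15_to_double(q15_t x)    { return x / 32768.0; }

int main(void)
{
    q15_t a = q15_from_double(0.5);    /* 0x4000 */
    q15_t b = q15_from_double(0.25);   /* 0x2000 */
    printf("0.5 * 0.25 = %f\n", q15_to_double(q15_mul(a, b)));   /* 0.125000 */
    return 0;
}

The same pattern works for any Q(M.N) split; only the shift amount and the width of the intermediate type change.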

πŸ‘︎ 19
πŸ’¬︎
πŸ‘€︎ u/Jeniefer_Rexon
πŸ“…︎ Mar 26 2020
🚨︎ report
[F#] Dependently typed natural number arithmetic and constraints using fixed-point decimal types. notebooks.azure.com/allis…
πŸ‘︎ 26
πŸ’¬︎
πŸ‘€︎ u/allisterb
πŸ“…︎ Jun 10 2019
🚨︎ report
Dependently typed natural number arithmetic and constraints using fixed-point decimal types. notebooks.azure.com/allis…
πŸ‘︎ 15
πŸ’¬︎
πŸ‘€︎ u/allisterb
πŸ“…︎ Jun 10 2019
🚨︎ report
OpenZeppelin Contracts - 6 week roadmap ⚑ Highlights: migration to Solidity v0.6, plans for ERC1155 and fixed-point arithmetic. github.com/OpenZeppelin/o…
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/abcoathup
πŸ“…︎ Jan 31 2020
🚨︎ report
Arithmetic Encoding Using Fixed-Point Math preshing.com/20121105/ari…
πŸ‘︎ 97
πŸ’¬︎
πŸ‘€︎ u/redditthinks
πŸ“…︎ Nov 05 2012
🚨︎ report
Optimizing Math Intensive Applications With Fixed-Point Arithmetic drdobbs.com/cpp/optimizin…
πŸ‘︎ 35
πŸ’¬︎
πŸ‘€︎ u/i_solve_riddles
πŸ“…︎ Aug 08 2014
🚨︎ report
[Poll] Support for a Dev Grant funding creation of a Decimal Arithmetic Solidity library (aka Fixed Point Arithmetic)

I'm considering putting in a grant to do a Solidity library for Decimal Arithmetic, sometimes referred to as Fixed Point Arithmetic.

An example of this is the Decimal library in Python.

The primary use case that I know of is financial applications, but I'm sure there are others.

I would like to be able to submit some basic data on the level of interest, and at least a generic description of the application you would like to use it for.
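
For anyone unfamiliar with the technique being proposed: the usual approach is to store decimal quantities as integers scaled by a fixed power of ten, so that addition and subtraction are exact and only multiplication and division need an explicit rounding rule. A minimal C sketch of the idea for the financial use case (two decimal places, i.e. cents); the names and the rounding rule are illustrative only, not part of the proposal:

#include <stdint.h>
#include <stdio.h>

/* Fixed-point decimal with 2 decimal places: amounts are stored as an
 * integer number of cents, so sums of money are exact. */
typedef int64_t cents_t;

/* Multiply an amount by a rate given in basis points (1/100 of a percent),
 * rounding halves up to the nearest cent (for non-negative amounts). */
static cents_t cents_apply_bps(cents_t amount, int64_t bps)
{
    return (amount * bps + 5000) / 10000;
}

int main(void)
{
    cents_t price = 1999;                         /* $19.99 */
    cents_t tax   = cents_apply_bps(price, 825);  /* 8.25% tax -> $1.65 */
    cents_t total = price + tax;                  /* exact: $21.64 */
    printf("total: $%lld.%02lld\n", (long long)(total / 100), (long long)(total % 100));
    return 0;
}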

πŸ‘︎ 26
πŸ’¬︎
πŸ‘€︎ u/pipermerriam
πŸ“…︎ Nov 15 2015
🚨︎ report
How should OpenZeppelin add Fixed Point Arithmetic support? forum.zeppelin.solutions/…
πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/martriay
πŸ“…︎ Mar 13 2019
🚨︎ report
Fixed-point Arithmetic in Picolisp the-m6.net/blog/fixed-poi…
πŸ‘︎ 23
πŸ’¬︎
πŸ‘€︎ u/tankfeeder
πŸ“…︎ Sep 07 2016
🚨︎ report
C++14 Fixed Point Arithmetic Library github.com/mizvekov/fp
πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/mizvekov
πŸ“…︎ Sep 08 2014
🚨︎ report
Optimizing Math-Intensive Applications with Fixed-Point Arithmetic | March 28, 2008 ddj.com/cpp/207000448
πŸ‘︎ 12
πŸ’¬︎
πŸ‘€︎ u/gst
πŸ“…︎ Apr 08 2008
🚨︎ report
Fixed-point arithmetic (Forth) en.literateprograms.org/F…
πŸ‘︎ 10
πŸ’¬︎
πŸ‘€︎ u/pointfree
πŸ“…︎ Jan 03 2015
🚨︎ report
Arithmetic Encoding Using Fixed-Point Math preshing.com/20121105/ari…
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/prograrticles_bot
πŸ“…︎ Nov 05 2012
🚨︎ report
Fixing floating-point arithmetics with Kotlin blog.frankel.ch/fixing-fl…
πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/nfrankel
πŸ“…︎ Jul 03 2016
🚨︎ report
Distinct operator for floating point arithmetic

I was wondering, in a purely hypothetical scenario (e.g. a hypothetical future Haskell-inspired language, kind of like Idris, or in a world where the next Haskell Report would consciously break backwards compatibility, like Python 2 -> 3) ...

How would you feel about having (+) reserved for only associative additive operations and having a distinct operator for the non-associative IEEE floating-point operation? For the sake of this post, let's call it (+.); analogously for multiplication.

That would read something like this:

-- instance IEEE Double where
--   ...
-- sqr :: (IEEE c) => c -> c
area = (sqr r) *. 3.14159265359

mega :: Int -> Int
mega n = 1_000_000 * n

So, how do you feel about this?


Some of my thoughts:

In other languages, where floating-point and integral types are shoehorned together with automatic upcasting, like x = 3 * 3.4, I see how this distinction might be unergonomic or even inconsistent: x = 3 * 3; y = 2.3 *. 2.1; z = 3 ??? 3.2 -- and plain unnecessary. However, in Haskell we already can't mix values of different numeric types: i + 3.2 with i :: Int is not valid, and we either have to write fromIntegral i + 3.2 or i + floor 3.2 anyway.

For comparison, we already have (^), (^^) and (**) in Haskell.
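
To make the non-associativity that motivates the proposal concrete, here is a tiny C demonstration (not from the post; Haskell's Double behaves the same way, since both are IEEE 754 binary64):

#include <stdio.h>

/* IEEE 754 addition is commutative but not associative: regrouping the
 * operands changes which rounding errors occur. */
int main(void)
{
    double big = 1e16, one = 1.0;
    double left  = (big + one) + one;   /* each +1 is rounded away       */
    double right = big + (one + one);   /* 2.0 is big enough to survive  */
    printf("left  = %.1f\nright = %.1f\nequal = %d\n", left, right, left == right);
    return 0;
}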

πŸ‘︎ 10
πŸ’¬︎
πŸ‘€︎ u/szpaceSZ
πŸ“…︎ Oct 20 2020
🚨︎ report
Shellmath: Floating-point arithmetic directly in bash

Hello, all!

I'm proud to present shellmath, a decimal calculator written entirely in bash:

https://github.com/clarity20/shellmath

Shellmath is a highly optimized library of shell functions for floating-point arithmetic.

Shellmath proves you can do floating-point math directly in the shell! It does not call out to any external programs -- calculators, text processors, or anything else!

For skeptics ;-) I've included a demo that calculates e. I've also posted a few words about the methodology, optimization techniques, and a few bells and whistles in the README and in the project wiki.

I eagerly await your feedback!

Be good, and have a great day!

P.S. Thanks to everyone for your feedback. I've committed bug fixes for everything through 11/2/20, and shellmath is now running peachy-keen not only on my Windows and Android devices but on Linux too. I'd be happy to have your feedback on this fixed-up version!

πŸ‘︎ 41
πŸ’¬︎
πŸ‘€︎ u/ClarityGuy20
πŸ“…︎ Nov 02 2020
🚨︎ report
The Fundamental Axiom of Floating Point Arithmetic johnbcoughlin.com/posts/f…
πŸ‘︎ 166
πŸ’¬︎
πŸ‘€︎ u/chronicfields
πŸ“…︎ May 08 2020
🚨︎ report
Floating point arithmetic

I've been using Julia for years in my research and just noticed today that simple floating point arithmetic is imprecise. For example, 1.6 - 0.66 = 0.9400000000000001. I taught myself Julia and don't come from a CS background, so I was not aware that this was even a thing until I googled it just now.

My question is two-fold:

  1. How serious is this issue? I've been maximizing the likelihood of a state space model which can involve many computations. Should I be worried about my end results?

  2. Are there simple ways to circumvent this problem or is it negligible?

edit: to clarify, my results are reported to 3-4 decimals, so extreme precision is not required; I'm more concerned with the imprecision compounding over many computations.

edit 2: Thanks for the feedback, I feel better already. I think I should be fine, but it's definitely an interesting quirk which I'm glad I know exists. Learn something new every day.
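
For a sense of scale (shown in C, but Julia's Float64 behaves identically, since both are IEEE 754 doubles): each individual operation is accurate to about one part in 2^53, roughly 1e-16, and the standard workaround is to compare with a tolerance instead of exact equality. A small sketch:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1.6 - 0.66;                        /* not exactly 0.94        */
    printf("x = %.17g\n", x);
    printf("x == 0.94: %d\n", x == 0.94);         /* 0: bitwise unequal      */
    printf("|x - 0.94| < 1e-12: %d\n", fabs(x - 0.94) < 1e-12);   /* 1       */

    /* Error after many operations is still tiny in relative terms:
     * summing 0.1 ten thousand times instead of writing 1000.0. */
    double sum = 0.0;
    for (int i = 0; i < 10000; i++) sum += 0.1;
    printf("sum = %.17g, relative error = %.3g\n",
           sum, fabs(sum - 1000.0) / 1000.0);
    return 0;
}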

πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/bear_mkt
πŸ“…︎ Sep 11 2020
🚨︎ report
What is 0.6 - 0.2 ? | Floating Point Arithmetic: Issues and Limitations ... youtube.com/watch?v=Hmm3Q…
πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/matrickx
πŸ“…︎ Mar 06 2021
🚨︎ report
When I type 0.1 + 0.2 into a calculator I get 0.3. In C, Perl, or JS I'll get precision loss. What optimizations do calculators have for floating point arithmetic?

Question is pretty much in the title. Any help appreciated!
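
Not a definitive account of any particular calculator's firmware, but the two usual mechanisms are: doing the arithmetic in decimal (BCD or scaled integers), where 0.1 and 0.2 are exactly representable, and rounding the displayed result to fewer digits than the internal precision, so tiny binary errors never reach the screen. A small C sketch of both ideas:

#include <stdio.h>

int main(void)
{
    /* Binary doubles cannot represent 0.1 or 0.2 exactly, so the error
     * shows up when every digit is printed... */
    double d = 0.1 + 0.2;
    printf("binary double : %.17g\n", d);   /* 0.30000000000000004 */

    /* ...but rounding the display to, say, 10 significant digits hides it. */
    printf("rounded print : %.10g\n", d);   /* 0.3 */

    /* Working in decimal fixed point (here: integer tenths) makes the
     * sum exact, which is the other common approach. */
    long tenths = 1 + 2;
    printf("decimal fixed : %ld.%ld\n", tenths / 10, tenths % 10);   /* 0.3 */
    return 0;
}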

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/Trauerkraus
πŸ“…︎ Jan 11 2021
🚨︎ report
The Fundamental Axiom of Floating Point Arithmetic johnbcoughlin.com/posts/f…
πŸ‘︎ 59
πŸ’¬︎
πŸ‘€︎ u/chronicfields
πŸ“…︎ May 08 2020
🚨︎ report
CMPUT 201 notes: "Floating point numbers are more complicated, and it’s very rare to do binary arithmetic with floating point numbers, so we won’t be covering that either."
πŸ‘︎ 22
πŸ’¬︎
πŸ‘€︎ u/UnitedFlatw0rm
πŸ“…︎ Apr 17 2020
🚨︎ report
KS-23 bug is NOT fixed as stated in 12.11.5 patch notes, impacts are still well below aiming point youtube.com/watch?v=6k5DK…
πŸ‘︎ 189
πŸ’¬︎
πŸ‘€︎ u/Totushek123
πŸ“…︎ Oct 12 2021
🚨︎ report
Covid-19: Hawker centres, coffee shops must designate β€˜fixed point’, such as drinks stall, to check patrons’ vaccination status todayonline.com/singapore…
πŸ‘︎ 66
πŸ’¬︎
πŸ‘€︎ u/chailoren
πŸ“…︎ Oct 12 2021
🚨︎ report
Faster floating point arithmetic with Exclusive OR nfrechette.github.io/2019…
πŸ‘︎ 80
πŸ’¬︎
πŸ‘€︎ u/zeno490
πŸ“…︎ Oct 22 2019
🚨︎ report
[decimal] a new high-performance arbitrary-precision decimal floating-point arithmetic package for Go

https://github.com/db47h/decimal

TL;DR This is a port of Go's big.Float to a big.Decimal. Compared to other similar packages out there, the API is identical to math/big, and it uses a decimal representation for the mantissa.

πŸ‘︎ 16
πŸ’¬︎
πŸ‘€︎ u/db47h
πŸ“…︎ May 28 2020
🚨︎ report
New tool "Herbie" automatically rewrites arithmetic expressions to minimize floating-point precision errors herbie.uwplse.org/
πŸ‘︎ 2k
πŸ’¬︎
πŸ‘€︎ u/jezeq
πŸ“…︎ Jan 24 2016
🚨︎ report
Floating-point arithmetic illusive?
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/trentseven
πŸ“…︎ Sep 19 2020
🚨︎ report
How was floating point arithmetic handled in older computers/processors that didn't have FPUs or math co-processors?

My guess is that they used some sort of floating point library that mimicked floating point math using the ALU to process the data. If so, I would like to find out how these libraries were implemented.
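
That guess is essentially right: compilers targeting FPU-less CPUs emit calls into a software floating-point ("soft-float") runtime, such as the support routines in GCC's libgcc or the Berkeley SoftFloat library, which unpack the sign, exponent and mantissa into integers and do all the work on the ALU. A heavily simplified sketch of a single-precision multiply, ignoring NaNs, infinities, subnormals and proper rounding, just to show the shape of such a routine:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative soft-float multiply for IEEE 754 single precision, using only
 * integer operations.  Real libraries also handle NaN, infinity, zero,
 * subnormals, overflow and round-to-nearest-even. */
static float soft_mul(float fa, float fb)
{
    uint32_t a, b;
    memcpy(&a, &fa, sizeof a);
    memcpy(&b, &fb, sizeof b);

    uint32_t sign = (a ^ b) & 0x80000000u;              /* sign of product    */
    int32_t  e    = (int32_t)((a >> 23) & 0xFF) - 127   /* sum of unbiased    */
                  + (int32_t)((b >> 23) & 0xFF) - 127;  /* exponents          */
    uint64_t ma   = (a & 0x007FFFFFu) | 0x00800000u;    /* implicit leading 1 */
    uint64_t mb   = (b & 0x007FFFFFu) | 0x00800000u;

    uint64_t m = ma * mb;          /* 48-bit product of two 24-bit mantissas  */
    if (m & (1ull << 47)) {        /* product in [2,4): renormalize           */
        m >>= 24;
        e += 1;
    } else {                       /* product in [1,2)                        */
        m >>= 23;
    }

    uint32_t bits = sign | ((uint32_t)(e + 127) << 23) | ((uint32_t)m & 0x007FFFFFu);
    float out;
    memcpy(&out, &bits, sizeof out);
    return out;
}

int main(void)
{
    printf("soft: %g, hardware: %g\n", soft_mul(1.5f, 2.5f), 1.5f * 2.5f);
    return 0;
}

Real soft-float libraries add the missing special cases and correct rounding, and often hand-optimize the hot paths in assembly, but the overall structure is the same.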

πŸ‘︎ 19
πŸ’¬︎
πŸ‘€︎ u/phatboye
πŸ“…︎ Sep 15 2019
🚨︎ report
Would you please help me understand pointers? I just read about pointer and array address arithmetic, so I understand the difference between incrementing what a pointer points to and incrementing where a pointer points, as well as assigning arrays to pointers and indexing. But I've gotten mixed up.

So in this K&R book it explains that you can modify a pointer to point elsewhere (I concur), but the result is undefined if you assign the location of a string constant to a pointer and then try to modify the string contents. So how does strcpy work?

void strcpy(char *s, char *t)
{
	while (*s++ = *t++)
		;
}

If s and t are pointers to arrays, then I believe it works because s and t point to arrays, not string constants, and the loop walks down t, copying each character into s, until it copies the terminating '\0'.

I experimented with the pointers here, assigning two arrays to two pointers, and then used strcpy:

main()
{
	char array123[4] = "123";
	char array4[3] = "44";
	char *stringp1 = array123; 
	char *stringp2 = array4;
	strcpy(stringp1, stringp2);
	printf("%s", stringp1);
}

I got errors and took an image of the errors: https://imgur.com/a/tXIKNmZ

So what am I mixing up? I need help.
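
A guess at what is going wrong, since the exact messages are only in the linked image: defining your own strcpy conflicts with the declaration the compiler already knows from the standard library (whose second parameter is const char *), and main() without a return type plus printf without #include <stdio.h> typically add further errors or warnings. The pointer logic itself is fine: both destinations are writable arrays, not string literals, so the undefined behaviour K&R warns about does not apply here. A version along these lines should compile cleanly; my_strcpy is just an illustrative rename:

#include <stdio.h>

/* Renamed so it does not clash with strcpy from <string.h>,
 * which has a different (const-qualified) signature. */
void my_strcpy(char *s, char *t)
{
    while ((*s++ = *t++))   /* copies each char of t, including the final '\0' */
        ;
}

int main(void)
{
    char array123[4] = "123";   /* writable arrays: copying into them is legal */
    char array4[3]   = "44";
    char *stringp1 = array123;
    char *stringp2 = array4;

    my_strcpy(stringp1, stringp2);   /* array123 now holds "44" */
    printf("%s\n", stringp1);
    return 0;
}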

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/superrenzo64
πŸ“…︎ Jan 06 2020
🚨︎ report
Modi Govt Got Arithmetic On Vaccines Utterly Wrong, Very Difficult to Fix the Mess Now:P Chidambaram youtube.com/watch?v=zPuLH…
πŸ‘︎ 19
πŸ’¬︎
πŸ‘€︎ u/IronicAlgorithm
πŸ“…︎ May 11 2021
🚨︎ report
What Every Computer Scientist Should Know About Floating-Point Arithmetic (1991) docs.oracle.com/cd/E19957…
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/PatientModBot
πŸ“…︎ Jun 29 2020
🚨︎ report
Can you seriously do random arithmetic in math as long as you do it to both sides?

I think this is what's always confused me!

I'm watching this dude on youtube: https://snipboard.io/gBTive.jpg

And it seems like he's doing every solution differently, so is it fair to say you can randomly add, subtract, divide, and multiply in math/algebra as long as you do it to the other side too?

I don't understand how that won't change the answer midway even though we're doing it to both sides; that seems like magic to me, honestly!

And if that really is the case, how will I solve such questions? It's almost as if people who do such things know the answer in advance and then reverse-engineer it.
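
The rule at work is just that applying the same valid operation to both sides of an equation preserves the equality; the operations are not random, they are chosen to peel away whatever surrounds the unknown. A small worked example, written out in LaTeX:

\begin{align*}
2x + 3 &= 11 \\
2x + 3 - 3 &= 11 - 3 && \text{(subtract 3 from both sides)} \\
2x &= 8 \\
\tfrac{2x}{2} &= \tfrac{8}{2} && \text{(divide both sides by 2)} \\
x &= 4
\end{align*}

Each line states the same fact about the same number x, just written with less clutter around x, which is why the answer cannot change along the way.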

πŸ‘︎ 26
πŸ’¬︎
πŸ“…︎ Dec 21 2021
🚨︎ report
Library for fixed precision arithmetic github.com/lumihq/purescr…
πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/paf31
πŸ“…︎ Feb 16 2018
🚨︎ report
Grading scheme, arithmetic progression of points, anyone do anything like this?

Based on our "50% can still pass" discussions, I've been thinking of a new grading scheme that is arithmetic in form. And I wonder, does anyone else do anything like this who can offer feedback on how it worked?

Let me explain. In my domain, work is progressive, meaning students cannot just pass summative tests (there are none per se). Rather, the grading is formative and ongoing, one thing built upon the last, with a good deal of reflexivity built in.

Therefore, upon consideration, it seemed sort of defeatist to the learning purpose if students could skip, say, every other project (one a week normally, in addition to some other longer projects) and still pass. With this skipping, progression and building upon previous work would not take place, or would barely take place.

So I'm thinking of using a pseudo-arithmetic sequence of points deducted for each subsequent unfulfilled assignment. For example, missing one assignment results in a 3-point deduction; missing two results in -3 for the first and -5 for the second, then something like -8 for a third. In this way, missing three projects would not be 9 points deducted from the final grade (3 + 3 + 3) but 16 points off the final grade (3 + 5 + 8).

The points off rise with each subsequent miss. This seems to provide an adjustment for the inflated grading policy we now have. Students now know that under the current school policy they can skip a bunch of projects and still pass; I hope this would stem some of that attitude. We do not seem to have any statement barring such a scheme, and if it is clearly presented up front and in an ongoing manner, it seems clear and transparent.

Thoughts?

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/gutfounderedgal
πŸ“…︎ Jul 05 2019
🚨︎ report
What Every Computer Scientist Should Know About Floating-Point Arithmetic (1991) docs.oracle.com/cd/E19957…
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/qznc_bot2
πŸ“…︎ Jun 29 2020
🚨︎ report
Posit Arithmetic VS Floating-Point (IEEE 754) Arithmetic

What do you guys think about the posit arithmetic standard?

Why do posits beat floats at both dynamic range and accuracy?

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/promach
πŸ“…︎ Mar 23 2019
🚨︎ report
